
OPTIMIZING SERVICE CAPACITY IN THE DRUG INFORMATION SERVICE 



By 
DANIEL LEE HALBERG 



A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL 

OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT 

OF THE REQUIREMENTS FOR THE DEGREE OF 

DOCTOR OF PHILOSOPHY 



UNIVERSITY OF FLORIDA 

1998 



Copyright 1998 

By 

Daniel Lee Halberg 



This manuscript is dedicated to my two role models, who taught me that there are 
no limits to the dreams I can achieve. I will forever miss you both. 

To my grandfather, the late James W. Carrig, I would like to thank you for being a 
father figure and role model at a time in my life when there were few heroes to choose 
from. You always had a smile on your face and a joke to tell. You taught me that 
happiness is built on hard work and a loving relationship with family and friends. 

To my greatest teacher, the late Kent Harriman, I would like to thank you for 
being my friend and "coach". Your peerless gift for teaching was a great loss to the 
world. I am sorry that you can not read this, because I think you would have appreciated 
this accomplishment. I feel that everything that I am now started with those first steps I 
took into your classroom. Thank you. 



ACKNOWLEDGEMENTS 

First, I would like to express my love and appreciation for my wife, Sara A. 
Halberg. Without her, I doubt I would have had the courage and discipline necessary to 
achieve this goal. I will always thank God for making her my wife. I am looking forward 
to all of our new adventures together. 

Second, I would like to extend my sincere and heartfelt gratitude to my major 
advisor, Dr. Charles D. Hepler. I thank him for his insight, confidence, and humor. Third, 
I would like to thank the rest of my committee, Drs. Richard Segal, Earlene Lipowski, 
Barney Capehart, and Ralph Swain for their interest, direction, and advice. Without their 
guidance, this project would not have succeeded. 

Fourth, I would like to thank the faculty, the staff, and graduate students of the 
department of Pharmacy Health Care Administration for their friendship, advice, help and 
encouragement. However, I would like to extend a special thank you to DeLayne 
Redding for all her help and concern. 

Finally, I would like to thank my entire family, especially my mother, Patricia 
Halberg, and my two sisters, Darcy and Heather, for their love and support. I bet they 
thought I would never graduate! Thanks also to my wife's family for their kindness. All 
of them have supported me in too many ways to mention here, but I wanted them to know 
that I will always appreciate their love and generosity. 






TABLE OF CONTENTS 

ACKNOWLEDGEMENTS iv 

LIST OF TABLES viii 

LIST OF FIGURES xi 

ABSTRACT xiii 

CHAPTERS 

1. INTRODUCTION 1 

Introduction 1 

Background 3 

Problem Statement 7 

Research Framework 8 

Significance 8 

Research Questions 9 

2. LITERATURE REVIEW 11 

Overview 11 

Service Capacity and Wait Times 12 

Relationships Among Wait Time, Perceived Service Quality, and Customer 

Satisfaction 19 

Perceived Service Quality 23 

Summary of the Literature 30 

3. RESEARCH FRAMEWORK AND HYPOTHESES 32 

Research Framework 32 

Research Hypothesis and Specific Research Questions 35 

4. METHODS 41 

Overview 41 

Study Location 41 

Data Sources, Sample Selection, and Data Collection Procedures 43 

Sample Size Calculations 48 

Study Variables 51 



Questionnaire Development and Validation 55 

Simulation Development, Verification, and Validation 67 

Data Analysis 70 

5. PRELIMINARY RESULTS 73 

Overview 73 

Part One: Historical and Concurrent Data 74 

Part Two: Student and Director Interviews 94 

6. MAIN RESULTS 106 

Overview 106 

Part Three: Relationships Among PSQ, OSQ, Behavioral Intention, 

Perceived Service Time, Actual Service Time and Service Delays 106 

Part Four: Simulation Results 128 

7. DISCUSSION AND CONCLUSIONS 154 

Overview 154 

Summary and Discussion of Results 155 

Conclusions 163 

Limitations of Study 164 

Recommendations for Future Studies 166 



APPENDICES 

A. TEXT OF PRE-TEST COVER LETTER 169 

B. PRE-TEST QUESTIONNAIRE - VERSION 1 170 

C. PRE-TEST QUESTIONNAIRE - VERSION 2 175 

D. PRE-TEST FOLLOWUP POSTCARD 180 

E. RESPONSES TO PRETEST QUESTIONNAIRE VERSION ONE 181 

F. RESPONSES TO PRETEST QUESTIONNAIRE VERSION TWO 182 

G. PRETEST QUESTIONNAIRE WRITTEN COMMENTS 183 

H. HISTORICAL DATA SHEET 192 

I. DATA COLLECTION FORM 194 






J. SEMI-STRUCTURED OUTLINE FOR STUDENT INTERVIEWS 196 

K. SEMI-STRUCTURED OUTLINE FOR CO-DIRECTOR 

INTERVIEWS 198 

L. TEXT OF COVER LETTER FOR MAIN QUESTIONNAIRE 200 

M. MAIN QUESTIONNAIRE 201 

N. FOLLOWUP POSTCARD FOR MAIN QUESTIONNAIRE 206 

O. RESPONSES TO MAIN QUESTIONNAIRE 207 

P. MAIN QUESTIONNAIRE WRITTEN COMMENTS 208 

Q. SIMULATION BLOCK DIAGRAMS 219 

R. SIMULATION PROGRAM CODE 225 

LIST OF REFERENCES 234 

BIOGRAPHICAL SKETCH 243 






LIST OF TABLES 

Table Page 

4-1. Required Sample Sizes for Selected System Parameters 49 

4-2. Responses to Pre-test Questions 13 and 21 62 

4-3. Pre-test Item-total Statistics for SERVPERF Subscale 64 

4-4. Pre-test Item-total Statistics for Perceived Service Time 65 

4-5. Pre-test Item Communalities 66 

4-6. Rotated Component Matrix of Pre-test Responses 66 

5-1. Percentage of Questions by Profession 75 

5-2. Percentage of Questions by Subscription Status 75 

5-3. Percentage of Questions by Type 78 

5-4. Service Times in Minutes by Question Type 78 

5-5. Service Times in Minutes Using Three Combined Question Types 79 

5-6. Percentage of Questions by Response Type Requested 81 

5-7. Service Time in Minutes by Response Type Requested 82 

5-8. Frequency and Percentage of Response Types by Question Type 82 

5-9. Percentage of Questions by Delay Status 83 

5-10. Percentage of Delays in Service by Time Needed 84 

5-11. Descriptive Statistics for the Average Number of Questions Answered 

by Month for the Past Ten Years 85 

5-12. Descriptive Statistics for Question Interarrival Times 86 

5-13. Significant P-Values for Interarrival Times by Hour of Day 89 






5-14. Descriptive Statistics for Question Interarrival Times in Minutes by 

Hour of Day 90 

5-15. Summary Statistics for Input Distributions 91 

5-16. Kolmogorov-Smirnov Tests for Exponentially Distributed Variables 91 

6-1. Questionnaire Sample Description 107 

6-2. Reasons for Not Responding 109 

6-3. Results of One-Way ANOVA Procedures Measuring Response Bias 109 

6-4. Initial Rotated Component Matrix of Main Questionnaire Responses 

for the SERVPERF Sub-Scale 110 

6-5. Main Questionnaire Communalities for the SERVPERF Sub-Scale 111 

6-6. Final Rotated Component Matrix of Main Questionnaire Responses for 

the SERVPERF Sub-Scale 113 

6-7. Item-total Statistics for SERVPERF Sub-Scale 116 

6-8. Item-Total Statistics for Service Time Perceptions 117 

6-9. Final Sub-Scale Reliabilities 117 

6-10. Descriptive Statistics for Questionnaire Measures 117 

6-11. Descriptive Statistics for PSQ Items 118 

6-12. Correlations Between Study Variables 119 

6-13. PSQ by Level of Behavioral Intention 121 

6-14. OSQ by Level of Behavioral Intention 122 

6-15. PSQ by Level of Perceived Service Time 123 

6-16. Perceived Service Quality by Delay in Service 125 

6-17. Percentage Below/Above the Mean Actual Service Time by Q27 

Expected Time 128 

6-18. Student Utilization, Total Service Time, and Expected Number in 

System by Arrival Modifier at 20 Simulated Days 136 






6-19. Descriptive Statistics for Selected Comparisons Between Observed and 

Simulated Data 137 

6-20. Comparison of Simulated Queue Statistics Versus Exact Solution 137 

6-21. Descriptives for Queue Statistics by Number of Students 148 

6-22. Descriptive Statistics for the Total Service Time, Time in Queue, 

Number in System, and Queue Length by Percentage Change in 

Research and Approval Time 149 

6-23. Descriptive Statistics for Number Completed, Utilization Percentage, 

and Delay Percentage by Percent Change in Research and Approval 

Time 150 

6-24. Effectiveness of Service Capacity Improvements Under Normal Arrival 

Rates 151 

6-25. Sensitivity of Optimal Solution to Changes in the Arrival Rate 153 






LIST OF FIGURES 

Figure Page 

1-1. Hypothesized Relationships 8 

3-1. Hypothesized Framework 40 

4-1. Scree Plot of Pre-test Data 66 

5-1. 95% Confidence Intervals of Service Time by Question Type (In 

Minutes) 77 

5-2. 95% Confidence Intervals of Service Time Using Three Combined 

Question Types (in Minutes) 80 

5-3. 95% Confidence Intervals of Service Time by Response Type (in 

Minutes) 82 

5-4. 95% Confidence Intervals of Average Daily Arrivals by Month 86 

5-5. 95% Confidence Intervals of Interarrival Times in Minutes by Hour of 

Day 90 

5-6. Frequency Histogram of Historical Interarrival Times 92 

5-7. Frequency Histogram of Historical Total Service Times 92 

5-8. Frequency Histogram of Service Times for Question Group One 93 

5-9. Frequency Histogram of Service Times for Question Group Two 93 

5-10. Frequency Histogram of Service Times for Question Group Three 94 

6-1. Scree Plot of Main Questionnaire Responses 113 

6-2. Regression Equation Plot of PSQ and Predicted PSQ by Actual Service 

Time where PSQ = 34.443 + 0.0013*(Total Service Time) 126 

6-3. Residual Plot of PSQ on Total Service Time 126 

6-4. Residual Plot of PSQ on Total Service Times Occurring within One 

Day 127 






6-5. Expected Number in System for Six Replications of 20 Days 136 

6-6. Observed Versus Simulated Probability Density Functions (PDF) 138 

6-7. Observed Versus Simulated Cumulative Density Functions (CDF) 138 

6-8. 95% Confidence Intervals for Delay Percentage by Service Rate 

Modifier and Number of Servers 152 

6-9. 95% Confidence Intervals for Total Service Time (in Minutes) by 

Service Rate Modifier and Number of Servers 152 






Abstract of Dissertation Presented to the Graduate School 
of the University of Florida in Partial Fulfillment of the 
Requirements for the Degree of Doctor of Philosophy 

OPTIMIZING SERVICE CAPACITY IN THE DRUG INFORMATION SERVICE 

By 

Daniel Lee Halberg 

May 1998 

Chairman: Professor Charles D. Hepler 

Major Department: Pharmacy Health Care Administration 

Health services are often developed without appropriately analyzing the system's 
ability to meet the needs of the consumer, and attempts to improve quality and efficiency 
often do not succeed because of the complexity and dynamic nature of services. 
However, some organizations are using sophisticated techniques such as simulation to 
analyze service systems. This study had three primary objectives: (1) to develop a 
computer simulation model for a drug information service; (2) to investigate the 
associations among actual service time, service delays, and perceived service quality; and 
(3) to recommend system improvements based on the simulation. 

This study used both experimental and non-experimental methods. It was 
conducted at the Drug Information and Pharmacy Resource Center (DIPRC) at Shands at 
the University of Florida. Overall, seven hypotheses and three specific research questions 
were used to explore relationships among the study variables. Six data sources were used: 
(1) historical data sheets, (2) historical workload data, (3) data collection forms, (4) 
personal interviews, (5) service quality questionnaires, and (6) simulation runs. 






The first three hypotheses tested the relationships among perceived service quality 
(PSQ), overall service quality (OSQ), and two measures of behavioral intention. A strong, 
positive relationship was found between PSQ and OSQ. In addition, relationships were 
found between PSQ and behavioral intention and between OSQ and behavioral intention. 
The remaining four hypotheses tested the relationships among PSQ, actual service time, 
service delays, and perceived service time. It was found that only service delays and 
perceived service time were significantly related to PSQ. However, perceived service time 
seemed more important than service delays with regard to PSQ. No relationship was 
found between actual service time and PSQ. Surprisingly, no practically significant 
relationships were found among actual service time, service delays, and perceived service 
time. 

A simulation model was constructed using GPSS/H (General Purpose Simulation 
System). The simulation was validated and found to be a credible model for analyzing the 
service system at the DIPRC. Exploration of the three specific research questions 
indicated that improving service times was more efficient than staffing increases for the 
purpose of reducing the percentage of service delays. 






CHAPTER 1 
INTRODUCTION 



Introduction 



Health services are often developed without careful consideration of the actual 
needs of the consumer, or without appropriately analyzing the service system's ability to 
meet these needs (Shostack, 1984). Services that fall short of meeting consumer needs 
must be modified or redesigned in order to improve the quality of the service. Two 
popular transformation paradigms that are often used to examine the quality problems 
related to service processes are total quality management (TQM) and business process re- 
engineering (BPR). 

The TQM paradigm has gained considerable support from the healthcare 
community as a transformation philosophy (Boerstler et al., 1996). TQM was pioneered 
by W. Edwards Deming in the 1950s, and focuses on the concept of "kaizen", a Japanese 
word meaning the continuous incremental improvement of an existing process (Hammer 
and Champy, 1993). In essence, TQM is an organization-wide commitment to steadily 
and continuously improve the quality of the system (Schmele, 1993). 

BPR is currently of considerable interest in many service environments including 
health care. Many individuals confuse the concepts of BPR with TQM. Re-engineering 
has been defined as "the fundamental rethinking and radical redesign of business processes 
to achieve dramatic improvements in critical, contemporary measures of performance, 
such as cost, quality, service, and speed" (Hammer and Champy, 1993, p. 32). Thus, 
reengineering is not fundamentally about process improvement, but process re-invention 
(Hammer and Champy, 1993). 



However different the approaches, these two paradigms share a common, primary 
focus on consumer needs and the outcomes of a process. Often, the goal is to achieve a 
desired outcome (i.e., production of a product or service) while meeting several 
objectives, including: improved productivity, improved quality, reduced total process time, 
increased throughput, and reduced waiting times (Hammer and Champy, 1993; Tumay, 
1995). 

Unfortunately, it is clear that a large percentage of BPR and TQM efforts fail to 
deliver any of the promised benefits (Boerstler et al., 1996; Hammer and Champy, 1993; 
Geisler, 1996; Kiely, 1995; Kotter, 1995; Rust et al., 1995). Although there have been 
many reasons given for these failures, part of the trouble inherently rests in the difficulty of 
understanding the complex and dynamic interdependencies of service systems. Because of 
this, some recent transformation projects have used sophisticated techniques, such as 
computer simulation, in order to analyze the relationships among the various components 
of service systems and the effects of implementing changes in the system (Tumay, 1995). 

This study had three primary objectives: (1) to develop a computer simulation 
model for a drug information service and to validate the model against the existing system; 
(2) to investigate the associations among actual service time, service delays, and 
evaluations of perceived service quality in a drug information service setting; and (3) to 
recommend system improvements based on the simulation model; in particular those 
improvements that reduce the time required to respond to consumer questions and 
information requests. 

The following sections of this chapter will provide background information 
describing the underlying concepts used in the research and will introduce the problem 
statement for the proposed study. The background will specifically address a) how 
queuing theory can be used as a basis for establishing optimal levels of service capacity; b) 
how service capacity can affect costs, waiting times, and perceived service quality; and c) 



the measurement of perceived service quality. Each of these issues, however, will be 
discussed more critically in the next chapter. 

Background 

Queuing Theory and the Simulation of a Service System 

The goal of most queuing theory and simulation based models is to understand the 
behavior of a particular system and to make decisions regarding the system based on the 
behavior of the model (Sella, 1992). Many real-world systems that involve random arrival 
and service rates can be examined by structuring them as queuing problems. Essentially, 
these problems can be evaluated in two ways, through either a closed form solution or an 
open form solution. 

Queuing theory uses closed form mathematical relationships to achieve exact 
answers to waiting-time and waiting-line problems. Simulations of queuing systems are 
open form, computationally dynamic models that describe the behavior of a system with 
respect to time. Open form solutions are used when there are no known equations for the 
operating characteristics, such as waiting time in the queue, for the system of interest. 
When available, closed form solutions are usually preferred to open form solutions 
because of their exactness and theoretical power. However, simulations are used to study 
waiting situations when closed form solutions are too complex or are not available. For 
example, simulations are used when the properties of the system to be modeled violate the 
underlying assumptions of queuing theory, or when the researcher desires more 
information than the queuing theory approximation provides (Krajewski and Ritzman, 
1990; Maryanski, 1980). 



Service Capacity and its Relationship to Perceived Service Quality 

Service capacity can be defined using the queuing theory framework as a function 
of the staffing level (s) and the service rate (μ). Staffing level refers to the number of 
employees available to serve consumers. The service rate refers to a variable amount of 
time required by an employee to complete a service request given their personal ability, 
their equipment, and the organization of the work (Krajewski and Ritzman, 1990; 
Lovelock, 1987; Winston, 1991). When there is a shortage of capacity relative to demand 
(λ), queues form, and total time in the system increases (Lovelock, 1987). It has been 
demonstrated in the literature that as waiting time increases, evaluations of perceived 
service quality and customer satisfaction are negatively affected (Bolton and Drew, 1994; 
Clemmer and Schneider, 1993; Davis and Vollmann, 1990; Davis, 1991; Dube'-Rioux et 
al., 1989; Hui and Tse, 1996; Katz, Larson, and Larson, 1991; Taylor, 1994a; Tom and 
Lucey, 1995). 

Two approaches have been used to reduce the adverse effects of waiting on 
customer satisfaction and perceived service quality in the literature: perceptions 
management and operations management. Perceptions management is an approach that 
attempts to reduce the perceived wait times of the consumers of a system through the 
creative use of distractions, apologies, queue information (e.g., place in line and estimated 
wait times), and the manipulation of perceived pre-service and post-service waits. 
According to reports in the literature, perceptions management has met with limited 
success; however, it is still unclear as to how successful these techniques are across a 
variety of service settings (Clemmer and Schneider, 1989a, 1989b, and 1993; Hui and Tse, 
1996). Operations management is an approach that attempts to reduce the actual wait 
times of the consumers of a service system using scheduling techniques, queue 
management, and work flow changes. At the heart of the capacity issue is the 
development of appropriate queuing systems that utilize capacity to its best advantage 
(Lovelock, 1987). This research focused primarily on the use of operations management 
techniques, specifically queuing theory and simulation, to manage service capacity. 

Measurement of Perceived Service Quality 

Four basic characteristics apply to most services: (1) benefits received from 
services are largely intangible; (2) services are activity focused rather than product 
focused; (3) services are simultaneously produced and consumed; and (4) the consumer 
participates in the production process (Gronroos, 1990). Because of these characteristics, 
it is generally recognized that service quality is harder to evaluate than product quality 
(Heskett, 1987; Parasuraman et al., 1985). 

The concept of quality, however, is difficult to define under any circumstance. 
Nevertheless, we think of quality in terms of the superiority, excellence, or value of a 
product or service (Christensen and Penna, 1995; Westgard and Barry, 1986). Two basic 
methods have been used to assess quality in organizations. The first method uses 
objective indicators (e.g., age of equipment, number of defects, or consumer time in the 
system) as measures of quality. The second method uses subjective indicators (e.g., 
perceived service quality, customer satisfaction, or employee satisfaction) as measures of 
quality. 

Service quality has been defined as the relative superiority of an organization and 
the services it provides (Parasuraman et al., 1988). Perceived service quality is concerned 
with the measurement of consumer attitudes regarding an organization's service quality. 
There has been significant debate in the literature concerning the dimensions and 
measurement of perceived service quality (Cronin and Taylor, 1992, 1994; Gronroos, 
1993; McAlexander, 1994; Parasuraman et al., 1985, 1988; Peter et al., 1993). A 
perceptions-only scale, derived from the SERVPERF scale (Cronin and Taylor, 1992), will 
be used as a measure of perceived service quality in this research, primarily because it is more 



practical than many of the available instruments and because it avoids some of the 
measurement issues related to the use of difference scores (Cronin and Taylor 1992, 1994; 
Peter et al., 1993; Zeithaml et al., 1996). 

Drug Information Services 

The first formally recognized drug information center (DIC) was established at the 
University of Kentucky in 1962 and by 1995 more than 175 organizations maintained 
DICs (Parker, 1965; Rosenberg et al., 1995; Vanscoy et al., 1996). The initial role of 
these centers was to evaluate and compare drugs and to promote rational drug therapy; 
however, the role of many centers has evolved to include educational activities, 
medication policy development, and outcomes research (Beaird et al., 1992; Vanscoy et 
al., 1996). Health care professionals are currently challenged by the necessity to keep up 
with the latest developments in new drugs and advances in therapies. Many of the DICs 
have come into existence because of the recognition by management that it is not efficient 
to have practitioners review the literature and identify solutions to all of the drug therapy 
problems they encounter. As such, DICs were developed as a central, organized approach 
to meeting these needs and to help disseminate drug information to the medical and 
nursing staff (Skoutakis, 1987; Smith, 1988). 

In the current healthcare environment, however, the outlook for DICs is uncertain. 
Several factors are currently placing increasing pressure on DICs to provide increased 
levels of service. First, advancing information technologies and managed care influences 
are forcing DICs to provide the highest quality service with near instantaneous access to 
information. Second, not only are DICs responsible for dispensing information regarding 
clinical decisions, they are also being asked to document their impact on patient care using 
outcomes measurement tools derived from disciplines such as pharmacoepidemiology and 



pharmacoeconomics. Third, many DICs are also required to participate in scholarly 
research and educational activities (Skoutakis, 1987; Vanscoy et al, 1996). 

While there is a recognized need for drug information services, the current era of 
cost containment and outcomes management presents a dilemma for health organizations, 
hospitals, and universities who are now required to justify support for non-profitable 
programs. Even though DICs are being asked to provide more services, cutbacks in 
staffing are not uncommon when funds available for such programs are reduced (Mailhot 
and Giacona-Dahl, 1987; Skoutakis, 1987; Vanscoy et al., 1996). 

Thus, the ability of DICs to evaluate their service in terms of effectiveness or 
outcomes is critical in maintaining a DIC in the current health care environment. Although 
there have been a number of articles documenting the activities of drug information and 
toxicology resources, very few of these articles have addressed drug information quality or 
effectiveness of DICs under resource constraints (Lilja, 1985; Rosenberg et al, 1995, 
Skoutakis, 1987). As John Lilja states, "it is safe to say that we know astonishingly little 
on how to optimize resources for drug information programs" (1985, p. 412). 

Problem Statement 

Determining sufficient service capacity in a drug information service is a 
challenging issue for managers trying to maintain acceptable levels of service quality. 
However, questions regarding capacity often involve decisions related to the acceptable 
amount of time required to deliver the service (e.g., answer a drug-therapy question). 
Unfortunately, the behavior of queuing systems is deceptively complex and often non- 
intuitive. If staffed according to "common sense" approaches, many systems are unable to 
handle the workload. 

The mathematics of queuing theory show that inadequate service capacity can 
greatly increase the waiting time before service is completed. Consequently, a consumer's 



overall time in the system will often be longer than intended. Consumers who wait long 
periods of time for service may be more likely to downgrade the quality of the service, 
even though other aspects of service performance may have been delivered competently 
(Taylor, 1994a; Taylor and Claxton, 1994). Unfortunately, however, service processes 
are often the most complex systems to understand because they frequently depend on the 
random nature of arrival processes and service times and the dynamic interdependencies of 
system behavior (Tumay 1995). 

Research Framework 

This research used a queuing paradigm to evaluate the relationship between service 
capacity in a drug information service and the length of time that a consumer must wait to 
obtain a response to a question or information request. In addition, the relationship 
between waiting time and consumer evaluations of perceived service quality was 
determined. It was proposed that a relationship existed between service capacity and 
service time or delays in service, and between service time or delays in service and 
evaluations of perceived service quality (Figure 1-1). This framework is developed in 
more detail in chapter three. 







Figure 1-1. Hypothesized Relationships (Service Capacity → Service Time and Delays → Perceived Service Quality) 



Significance 



Although simulation research involving staffing patterns has been conducted in a 
variety of health related systems, including nursing, psychiatry, and emergency 



departments, no research has been published describing the system behavior of drug 
information services using a queuing paradigm. In addition, some research has been 
conducted examining the relationship between perceived service quality and service 
capacity; however, this area is still in the rudimentary stages of development. This 
research would help to expand existing knowledge in these areas. 

From a more practical standpoint, various researchers have proposed a positive 
relationship between improved service quality and increased revenues or improved 
productivity (Gronroos, 1990). However, this benefit may be less useful to "free" service 
programs that are funded under fixed or shrinking budgets. In this environment, it is more 
valuable to suggest two possible impacts that service capacity decisions can have on 
intentions and utilization. First, if service quality is perceived as inferior by consumers 
then they may be less likely to rely on the service. Thus, inadequate service capacity may 
reduce the perceived value of the service. If utilization decreases then it becomes more 
difficult to justify the service's existence. The service may be suspended because 
consumers feel the quality of a service is poor and have decided not to use it, regardless of 
their actual need. 

Second, budgetary constraints often make it difficult to justify increases in staffing 
or other improvements in service capacity without significant evidence of need. By using 
the information obtained from simulation models and perceived service quality surveys, it 
may be easier to demonstrate current shortfalls in quality, current and projected demand 
levels, and the positive and negative consequences of changes in service capacity. 

Research Questions 

There are four questions that this research will attempt to address: 
1. Can information regarding staffing levels, service rates, call arrival patterns, and 
system structure be used to build a simulation model that is a reliable and valid 




substitute for the actual drug information service for the purpose of capacity 
planning? 

2. How do staffing levels, arrival and service rates, and system structure affect the 
important performance characteristics of the system? (Adapted from Krajewski and 
Ritzman (1990)): 

A. Queue Length: This is the expected number of consumer information requests 
and questions in the system at a given point in time. 

B. Service time: This is the expected total service time required to deliver a 
response to a question or information request. This is measured from the 
arrival of a question or request for information into the information service 
until the delivery of a response. 

C. Utilization Rate: This is the collective utilization of the service facilities; it 
reflects the percentage of time the service personnel are busy (as opposed to the 
time they are idle) and is described as the ratio of the time the servers were busy 
to the total time measured. (A brief simulation sketch illustrating these measures 
follows this list of questions.) 

3. How does total service time relate to consumers' perceived service quality? How 
do delays in service relate to consumers' perceived service quality? Could these 
relationships be used in the simulation model to reflect the impact of changes in 
service capacity on consumers' perceived service quality? 

4. What are the critical variables that affect the simulation model? Based on these 
variables, what management rules can be recommended to improve service capacity 
and reduce the response time of the system? 
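
The dissertation's simulation was written in GPSS/H (Appendix R). As an illustration only, 
the following minimal Python sketch shows how the performance characteristics named in 
question 2 (expected number in system, total service time, and utilization) can be estimated 
for a first-come, first-served queue with a chosen number of servers; the arrival rate, service 
rate, and staffing level used here are hypothetical, not values from the DIPRC. 

    import random


    def simulate(arrival_rate, service_rate, servers, n_customers, seed=42):
        rng = random.Random(seed)
        free_at = [0.0] * servers           # time at which each server next becomes free
        clock = 0.0                         # arrival clock
        busy_time = 0.0                     # summed service time across all servers
        total_time_in_system = 0.0          # summed time in system across all customers
        last_departure = 0.0

        for _ in range(n_customers):
            clock += rng.expovariate(arrival_rate)       # exponential interarrival gap
            service = rng.expovariate(service_rate)      # exponential service duration
            start = max(clock, min(free_at))             # wait if every server is busy
            server = free_at.index(min(free_at))         # earliest-available server
            free_at[server] = start + service
            busy_time += service
            total_time_in_system += (start + service) - clock
            last_departure = max(last_departure, start + service)

        mean_time_in_system = total_time_in_system / n_customers
        throughput = n_customers / last_departure
        return {
            "mean total service time": mean_time_in_system,
            "expected number in system": throughput * mean_time_in_system,  # Little's law
            "utilization": busy_time / (servers * last_departure),
        }


    if __name__ == "__main__":
        # hypothetical rates: 1.5 questions arrive per hour, each server answers 1 per hour
        print(simulate(arrival_rate=1.5, service_rate=1.0, servers=2, n_customers=50000))

Changing the number of servers or the service rate in such a sketch is the same kind of 
experiment the GPSS/H model performs on a much more detailed representation of the service. 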



CHAPTER 2 
LITERATURE REVIEW 



Overview 



Chapter one introduced four important concepts and their relationship to this 
research. First, the capacity of a service system can be described in terms of a queuing 
paradigm. Second, complex queuing systems can be modeled using computer simulation. 
Third, waiting times can be influenced by changing the service capacity. Fourth, it was 
proposed that there is an inverse relationship between waiting time and evaluations of 
service quality. 

This chapter critically reviews the previous research related to the concepts 
mentioned above and their application to this project. It will begin by presenting an 
overview of queuing theory. Second, methods for improving the performance (i.e., 
reducing waiting times) of queuing systems will be described. Third, evidence 
demonstrating the relationship between wait time and consumers' evaluations of services 
(in terms of service quality and satisfaction) will be reviewed. It will then discuss how 
managers often underestimate acceptable wait times and how consumers overestimate the 
time waited. Fourth, the chapter will examine the conceptual basis of perceived service 
quality and describe why this study will use a perceptions-only instrument to measure 
perceived service quality. The chapter will end with a description of the practical 
relationship between service quality and behavioral intention. In chapter three, the 
research framework will be presented and the research hypothesis for this project will be 
developed. 






Service Capacity and Wait Times 

Overview of Queuing Theory 

There are three primary components of a queuing problem. The first is the input 
source, defined as the population of potential entrants into the service system. We can 
describe this population in terms of its size, the nature or urgency of need, and the arrival 
distribution. The size of the input source may be infinite or finite, depending on whether 
the number of customers in the system significantly affects the arrival rate. The nature or 
urgency of need influences the relationship between waiting time or queue length and 
reneging or balking (i.e., leaving without service) (Krajewski and Ritzman, 1990; Winston, 
1991). 

The arrival distribution is a probability distribution that describes either the number 
of arrivals per unit time or the time between arrivals (i.e., the interarrival time) (Krajewski 
and Ritzman, 1990; Winston, 1991). In effect, many managers responsible for assessing 
service capacity inappropriately assume some constant or narrowly defined arrival pattern. 
There are, however, other probability distributions that can describe customer arrival 
streams better than a constant. For instance, arrivals per unit time are sometimes assumed 
to be Poisson distributed and interarrival intervals are often approximated by some form of 
the gamma distribution, usually the exponential (Krajewski and Ritzman, 1990; Winston, 
1991). 
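
As a small illustration of that last point (not part of the study's methods; the arrival rate 
below is a made-up value), the sketch draws exponential interarrival gaps and shows that the 
resulting counts per unit time behave like a Poisson variable, whose mean and variance are 
both approximately λ. 

    import random

    rng = random.Random(0)
    lam = 4.0                        # hypothetical mean of 4 arrivals per hour
    hours = 10000                    # simulated horizon

    t, arrivals_per_hour = 0.0, [0] * hours
    while True:
        t += rng.expovariate(lam)    # interarrival gap ~ Exponential(lambda)
        if t >= hours:
            break
        arrivals_per_hour[int(t)] += 1

    mean = sum(arrivals_per_hour) / hours
    var = sum((c - mean) ** 2 for c in arrivals_per_hour) / hours
    print(f"mean = {mean:.2f}, variance = {var:.2f}")   # both should be close to lam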

The second component of a queuing system is the service process. We describe 
the service process in terms of the service arrangement and service rate distribution. The 
service arrangement comprises the organization of the service (i.e., workflow), the number 
of servers available to handle the arrivals, and the number of lines leading to those servers. 
The service rate distribution is a probability distribution governing the amount of time 
a server takes to service a customer. Often, the service rate is described by the 






exponential, Weibull or Erlang distributions (Krajewski and Ritzman, 1990; Law and 
Kelton, 1991; Winston, 1991). 

The third component of a queuing problem involves the queue discipline, also 
sometimes called the priority discipline. The queue discipline refers to the order in which 
customers are processed through the queue. There are several common queue disciplines, 
including FIFO or FCFS (first come first served), LIFO or LCFS (last come first served), 
SPTF (shortest processing time first), and LPTF (longest processing time first). The latter 
two are more specifically termed priority queue disciplines because customers are 
categorized based on their expected length of service. These categories are given a 
priority level, in which those customers allocated to higher priority levels go before those 
customers with lower priority. Within each category, however, customers are serviced in 
a standard queue discipline such as FCFS (Krajewski and Ritzman, 1990; Winston, 1991). 

Two other queue disciplines exist which are more difficult to model. The first is 
"Shortest Lead Time", in which the arrival with the shortest time between the current time 
and the promised time has the highest priority regardless of when they entered the system. 
The second is "Arbitrary Priority", where the service order and time are dependent on the 
servers' preferences or some form of undetermined triaging mechanism. These disciplines 
are often modeled as a SIRO (service in random order) discipline (Larson, 1987; 
Schwartz, 1975). 
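
To make the shortest-processing-time-first idea concrete, the fragment below (illustrative 
only; the request labels and time estimates are invented) keeps the waiting requests in a heap 
ordered by estimated service time, which is one simple way to realize an SPTF discipline. 

    import heapq

    # (estimated service time in minutes, request label) -- hypothetical examples
    waiting = [(45, "full literature search"), (5, "dosage check"), (20, "interaction review")]
    heapq.heapify(waiting)                        # heap is ordered by the time estimate

    clock, completions = 0.0, []
    while waiting:
        est_minutes, label = heapq.heappop(waiting)   # shortest estimated job first
        clock += est_minutes
        completions.append((label, clock))

    print(completions)   # SPTF minimizes the mean time a request spends in the system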

Most queuing models depend on a steady state system for estimating queue 
statistics for a varying number of servers. A system is in steady state if 



(1) the number of servers, the average arrival rate, and the average service 
rate are not changing, (2) the average arrival rate is less than the average 
service rate times the number of servers, and (3) these conditions have 
existed for a substantial period of time... The opposite of steady state is 
transience, which refers to the behavior of the system during the period 
following some change (McClain and Thomas, 1985, p. 550). 






It is not hard to imagine that if the service rate is less than the arrival rate, then the 
queue will grow without bound because the system is not physically capable of handling 
the volume of arrivals. However, it is also mathematically true that the queue length will 
approach infinity when the service rate equals the arrival rate. This can be illustrated using 
two queuing theory results based on the M/M/1 (single server, single queue) model. 
First, the utilization rate (ρ) equals the arrival rate (λ) divided by the service rate (μ) 
(Winston, 1991). Second, the expected number of customers in a line (L) is a function of the 
utilization rate (ρ) such that (Winston, 1991) 



ρ = λ / μ (Equation 1-1) 

L = ρ / (1 - ρ) (Equation 1-2) 



Notice that as the arrival rate approaches the service rate, the utilization rate 
approaches one. Therefore, as the utilization rate approaches one, L approaches 
infinity (i.e., the denominator 1 - ρ approaches zero). This is not an intuitive result, and this is the primary 
pitfall of naive staffing models (i.e., models that do not account for the effect of random 
variation on queue behavior). When managers try to match the service capacity exactly to 
the demand, long waiting lines will occur. 
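
A small numerical illustration of Equations 1-1 and 1-2 follows (the service rate of 10 
questions per day is a made-up figure, not the DIPRC's): 

    service_rate = 10.0                                  # hypothetical: 10 questions per day
    for arrival_rate in (5.0, 8.0, 9.0, 9.5, 9.9, 9.99):
        rho = arrival_rate / service_rate                # Equation 1-1
        L = rho / (1.0 - rho)                            # Equation 1-2
        print(f"rho = {rho:.3f}   expected number in system L = {L:.1f}")

Even at 99 percent utilization, which a "common sense" staffing plan might call perfectly 
matched to demand, the expected number in the system is roughly 99 questions. 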

There are six options typically available for improving the performance of systems 
under a queuing paradigm: (1) add servers, (2) increase the service rate, (3) increase 
queue size, (4) change the distribution of arrivals, (5) reduce the variance in service times 
or interarrival times, and (6) change the queue discipline. These options are discussed in more 
detail below. 




Methods for Improving System Performance 

First, adding servers (i.e., increasing staffing) is usually the most frequently 
considered method for improving service capacity; however, it can also be costly and often 
is the least efficient method. As you add more servers, the marginal impact that each new 
server has on the system decreases. There is also the tradeoff of balancing the utilization 
rates with excess capacity. Management is interested in maintaining high utilization, but 
this objective may have an adverse impact on the other operating characteristics. For 
instance, when utilization is too high, workers may have trouble adapting to changes in the 
service demand. However, when utilization is too low workers will have too much idle 
time (Krajewski and Ritzman, 1990). There may also be regulatory or accreditation 
restrictions that limit the minimum number of servers in a particular setting (Duraiswamy 
et al., 1981). There have been three different approaches used in the literature to optimize 
staffing levels: 

1. Adjust staff levels in terms of the actual number of service personnel 
(Duraiswamy et al., 1981; Hammond and Mahesh, 1995; Ishimoto et al., 
1990; Lamy et al., 1970; Saunders et al., 1989; Sumner and Hsieh, 1972). 

2. Adjust staff levels based on the number of full time equivalents (FTEs) 
(Hashimoto et al., 1987). In many settings, a more appropriate method of 
optimizing staffing is to consider the number of FTEs rather than the actual 
number of persons. In this way it becomes easier to consider part-time 
employees, full-time employees that devote parts of their work day to 
different tasks, and employees that have many different simultaneous tasks 
to complete. 

3. Adjust staff levels based on a percentage of maximum (rather than 
expected) workload (McHugh, 1989). In instances when there are extreme 






demand shifts, it is sometimes necessary to anticipate staffing levels for the 
maximum rather than the expected workload. Models using this approach 
usually discuss staffing levels in terms of a percentage of the maximum 
workload. 

Second, queue sizes and wait times can be improved by increasing the service rate 
rather than the staffing level. Service rates can be improved through new technologies, 
training, and workflow redesign (Krajewski and Ritzman, 1990). For instance, Carruthers 
(1970) examined the work turnover rate in a laboratory setting. It was found that the 
purchase of new equipment was more cost-effective than increasing staffing. Also, Kumar 
and Kapur (1989) found that increasing hospital nurses' shift length from eight to twelve 
hours was more beneficial than the addition of staff. Chin and Sprecher (1990) and Ozeki 
and Ikeuchi (1992) both found that workflow changes were at least as important as 
staffing increases in improving system wait times. However, increasing the service rate does 
not always improve system wait times, especially when service times are already relatively 
short. For example, Lamy et al. (1970) found that only a fraction of the total waiting time 
was related to the actual service time in a pharmacy setting. In this case, the staffing level 
and variations in arrival times were more significant predictors of queue waits than the 
service time. 

Third, increasing the queue size may be an option if customers are being turned 
away because they cannot even enter the system (e.g., busy telephone line). Since queues 
are stochastic systems, there may be times when a queue exceeds its limitations, even 
when ρ is less than one (i.e., the arrival rate is less than the service rate). If this happens too 
often, then queues will have high rejection rates. Costs of increasing the queue size will 
vary dramatically depending on the type of queue. For instance, the addition of several 
telephone lines may be inexpensive when compared to the construction of a larger waiting 
area (Krajewski and Ritzman, 1990). 






Fourth, changing the arrival rate is usually one of the most subtle and overlooked 
areas of improvement. Some examples of how arrival rates can be influenced include (1) 
informing consumers of typically idle or slow periods so that you might attract them to use 
services during these times instead of peak times, (2) getting customers to use alternate routes 
for obtaining the same information, such as a fax-back service or web site, and (3) 
scheduling appointments with some or all consumers so that arrivals are less random and are 
reduced during peak times (Krajewski and Ritzman, 1990). Few studies have discussed 
techniques for modifying the arrival rate into the system, other than appointment systems. 
One study conducted by Reilly et al. (1978) discussed the impact of optimizing staffing in 
conjunction with a patient delay-scheduling model, where patients are given a delay time 
before being admitted to the system. This delay manifested itself either as an appointment 
or an anticipated waiting time. This had two potential benefits. First, since patients had 
better knowledge of the length of the wait, they were not necessarily bound to the clinic 
and could spend time elsewhere; hence, the patients could improve the quality of their 
waits. Second, although not directly reported by Reilly et al., the realized interarrival 
variance should have decreased, allowing for a more accurate prediction of staffing 
requirements. 

Fifth, reducing the variance in service times or interarrival times can also reduce 
queue lengths. By examining the steady state equations for the M/G/k queue 
characteristics, it can be shown that reducing variance can have a substantial effect on 
reducing the effective waiting times and queue lengths. Methods of reducing variance in 
service times have been discussed in the contexts of work design, facility design, total 
quality management (TQM), and statistical process control (SPC). For example, 
redesigning or standardizing work flow in order to eliminate errors, backtracking, and 
rework would help reduce the variance in service times by eliminating inconsistent work 
patterns. It should be noted that these techniques are often the same ones used to reduce 






service times; hence, when making changes in the system that are designed to reduce 
service times, the variance in service times is also often reduced. 
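
For the single-server special case (M/G/1), this effect can be quantified with the 
Pollaczek-Khinchine formula. The sketch below is illustrative, with made-up rates, and shows 
that holding the mean service time fixed while lowering its variance shortens the expected 
wait in queue. 

    def mg1_mean_wait_in_queue(arrival_rate, mean_service, service_variance):
        """Pollaczek-Khinchine formula: Wq = lambda * E[S^2] / (2 * (1 - rho))."""
        rho = arrival_rate * mean_service
        assert rho < 1.0, "steady state requires rho < 1"
        second_moment = service_variance + mean_service ** 2
        return arrival_rate * second_moment / (2.0 * (1.0 - rho))


    arrival_rate, mean_service = 0.8, 1.0        # hypothetical; same mean service time throughout
    for variance in (1.0, 0.5, 0.25, 0.0):       # 0.0 corresponds to deterministic service
        wq = mg1_mean_wait_in_queue(arrival_rate, mean_service, variance)
        print(f"service-time variance = {variance:.2f} -> mean wait in queue = {wq:.2f}")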

Reducing the variance in interarrivals may achieve similar benefits. As mentioned 
previously, reducing the variance on arrivals might take the form of appointments or 
blocking types of arrivals into specific time slots (Kleinrock, 1975; Konz, 1990; Lamy et 
al., 1970; Law and Kelton, 1991; Reilly et al., 1979; Westgard and Barry, 1986; Winston, 
1991). 

Sixth, changing the queue discipline has also been shown to affect system 
performance in terms of waiting times and line lengths. For example, it has been shown 
that using the shortest-processing-time-first (SPTF) discipline will decrease the variance of 
the wait times in a system, thereby decreasing the number of long service times due to 
random variation. However, in many health care services, urgency (i.e., priority) plays a 
significant role in queue behavior due to prioritization and preemption of service requests 
based on need (e.g., emergency care), so it would often be impossible to strictly adhere to 
a SPTF discipline. However, it may still be possible to service non-urgent arrivals using a 
SPTF discipline (Krajewski and Ritzman, 1990; Law and Kelton, 1991; Schriber, 1991; 
Winston, 1991). 

In the previous section, an overview of queuing theory was given in order to 
describe the important variables and concepts that are often used in queuing and 
simulation models to model waiting times, service times, and delays. In addition, methods 
for improving system performance were reviewed. In the next section, the relationships 
among consumer waiting time, perceived service quality, and customer satisfaction are 
discussed. 



Relationships Among Wait Time, Perceived Service Quality, and Customer Satisfaction 

Waiting Time and Perceived Service Quality 

Only three studies were found that examined the effect of wait time on service 
quality, and none of the studies used the SERVQUAL or SERVPERF scales to measure 
perceived service quality. Furthermore, these studies considered service delays rather than 
queue waits. However, the results of these studies do indicate that wait times can 
adversely affect evaluations of service quality. 

Taylor (1994a) proposed a framework called "The Wait Experience Model" for 
describing the relationship between the wait experience and the overall service evaluation. 
This model was evaluated using a sample of airline passengers who experienced pre- 
boarding delays in their flight plans. Taylor found that customers' overall evaluations of 
service quality were primarily related to their level of anger and its associated feelings of 
annoyance, irritation and frustration. Anger could be caused by (1) the customer's 
uncertainty about the length of the delay, (2) the actual length of the delay, (3) the 
customer's perception of the service provider's control over the delay, and (4) the degree 
of filled time. 

In further research related to this model, Taylor and Claxton (1994) found that 
individuals who encountered pre-boarding delays were less likely to be satisfied with the 
quality of the other services offered during the flight. Similarly, Dube'-Rioux et al. (1989) 
found in a restaurant setting that the timing of the delay (i.e., pre-service, in-service, or 
post-service) and level of customer need were significant in evaluations of service quality. 

Waiting Time and Customer Satisfaction 

A much more thorough examination has been conducted in the literature 
concerning the relationship between wait time and customer satisfaction. Customer 






satisfaction is a consumer's post-service reflection regarding how well the service 
compared with their expectations. Customer satisfaction occurs when the perceived 
performance of the service exceeds expectations, and dissatisfaction occurs when 
performance is lower than expectations (Bolton and Drew, 1991; Oliver, 1993). 

Evaluations of customer satisfaction and perceived service quality usually rely, 
either implicitly or explicitly, on the confirmation-disconfirmation paradigm, in which 
consumer evaluations are based on a confirmation of expectations. In addition, the service 
quality literature generally conceptualizes customer satisfaction as antecedent to perceived 
service quality (Boulding et al., 1993; Bitner, 1990; Cronin and Taylor, 1992; 
Parasuraman et al., 1985; Parasuraman et al., 1988; Taylor, 1994b; Taylor and Cronin, 
1994). 

Since the constructs of customer satisfaction and perceived service quality share 
some of the same dimensions, often a common theoretical basis, and perhaps a causal 
relationship, it is probable that waiting time and service delays affect both constructs. 
Therefore, an examination of the literature involving waiting time, service delays, and 
customer satisfaction is important, especially since many of these studies do not 
adequately describe their satisfaction instruments or their definitions of satisfaction. This 
makes it difficult to ascertain whether the authors are measuring customer satisfaction or 
service quality or both. 

Hui and Tse (1996) tested a service evaluation model for a computerized course 
registration service at a university. Katz, Larson, and Larson (1991) conducted a study 
with bank customers. Studies done by Davis and Vollmann (1990) and Davis (1991) 
concerned the length of waits in a fast food restaurant and the percentages of satisfied 
customers. In all of these studies, the results indicated an inverse relationship between 
perceived wait times and customer satisfaction. They all provide evidence that as wait 
times increase customer satisfaction decreases. However, Davis (1991) suggested that a 
non-linear relationship exists between waiting time and satisfaction. The proposition of a 






non-linear relationship between waiting and consumer preference was also suggested by 
Richard Larson (1987) in a discussion regarding the perceived utility of waiting. 

In addition, Tom and Lucey (1995) found two important results concerning 
expected waiting times and customer satisfaction in a supermarket setting. First, their 
results found that customers were more satisfied in situations where the wait was shorter 
than expected compared with situations where the wait was longer than expected. More 
importantly, however, the researchers found that it was the reason for the wait that most 
affected the levels of satisfaction. If the customers blamed the store for the unexpected 
wait, then satisfaction with the store tended to decrease. However, if customers attributed 
the wait to something outside of the store's control then no changes in the satisfaction 
levels were evident. 

Manager and Consumer Estimates of Waiting Time 

As discussed above, the literature seems to establish a relationship between the 
amount of time consumers wait for service and their evaluations of service quality. 
However, there is evidence to suggest that managers and consumers perceive the wait 
experience differently. Davis and Vollmann (1990) and Davis (1991) found that managers 
tended to overestimate the duration of what a customer would consider an acceptable 
delay. This result is consistent with the gap analysis research conducted by Parasuraman 
et al. and others (Brown and Swartz, 1989; Parasuraman et al., 1985; Swartz and Brown, 
1989). Furthermore, Katz, Larson, and Larson (1991) conducted a study with bank 
customers that found that individuals tend to overestimate their waits. In addition, the 
researchers asked customers to define what they would consider an acceptable wait. 
Customers with longer definitions of acceptable wait times tended to be more satisfied 
than customers with shorter definitions. Thus, if managers overestimate what consumers 






consider an acceptable delay and consumers overestimate the time they have waited, there 
is potential for unintended magnification of the actual wait experience. 

Linking Service Evaluations to Service Capacity 

Two methods have been used to predict how changes in service capacity will affect 
consumer evaluations of the service. The first method is to operationalize the service 
quality dimensions as measurable variables. For instance, Ozeki and Ikeuchi (1992) 
studied service evaluation in a telephone service setting using a workflow simulator and 
measures of service quality (MOSQs) for different components of the work process. The 
authors defined an MOSQ as some quantifiable operationalization of service quality, such 
as response time. Using simulation, the authors were able to see the effects of system 
changes on these MOSQs, where a change in the desired direction implicitly represented 
an improvement in quality. 

The second method is to describe the relationship between changes in system 
performance and service evaluations in terms of a cumulative probability distribution. For 
instance, Buxton and Gatland (1995) conducted an extensive simulation model that used a 
customer satisfaction index to model the effects of work-in-process (WIP) and delivery 
time on levels of customer satisfaction. This customer satisfaction index was expressed as 
a probability distribution, where a delivery time (e.g., delivery within seven days) was 
equated to an expected level of customer satisfaction. This approach does not produce 
exact results, however, it can allow managers to examine relationships between service 
capacity and perceived service quality. 

The previous section evaluated the literature describing how consumer wait times 
might influence evaluations of perceived service performance, such as perceived service 
quality and customer satisfaction. Furthermore, it was shown how managers' and 
consumers' perceptions of the wait experience could potentially magnify this relationship. 






In addition, two methods of building this relationship into a simulation model were 
summarized. The next section will identify the current state of development in the 
measurement of perceived service quality and present research indicating that perceived 
service quality is associated with intended future behavior. 

Perceived Service Quality 

Overview of the Conceptual Basis of Perceived Service Quality 

Understanding how consumers of a service evaluate service quality is an issue of 
importance to managers. It is clear that if a service provider understands how consumers 
evaluate a particular service, managers can use these evaluations to focus on ways to 
improve. However, developing a model to effectively evaluate service quality has been an 
evolving and highly debated research issue (Gronroos, 1990). 

Four basic characteristics apply to most services. First, services are essentially 
intangible. Second, services usually focus on activities or information rather than 
products. Third, services are produced and consumed simultaneously (i.e., they cannot be 
inventoried). Fourth, the consumer is a participant in the production process (Gronroos, 
1990; Lovelock, 1980). As Shostack (1984) describes them, "[s]ervices are unusual in 
that they have impact, but no form" (p. 134). Because of the intangibility of service 
performance and the aspect of simultaneous production and consumption, it is generally 
more difficult to develop quality indicators for services than for products (Heskett, 1987; 
Parasuraman et al., 1985). 

Quality, however, is an abstract concept and hard to adequately define. At its 
most basic level it can be generally thought of from the point of view of Philip Crosby's 
"conformance to requirements" or J.M. Juran's "fitness for use" definitions (Westgard and 
Barry, 1986, p. 5). However, it may be more useful to consider quality as the inherent or implicit degree of excellence, value, or worth of a product or service measured by its ability to satisfy a given need (Christensen and Penna, 1995; Westgard and Barry, 1986).
We can usually describe quality in terms of one or more of three dimensions: (1) a 
structural dimension (i.e., the attributes of the facility, equipment, human resources, and 
organizational structure that are the components of process); (2) a process related 
dimension (i.e., the activities that make up the process); and (3) a technical or outcome 
related dimension (i.e., the end result or effect of a process) (Angaran, 1993; Gronroos, 
1990). 

Much of the early research concerning service quality focused primarily on 
identifying measurable dimensions of service quality. Two early developments of service 
quality were Lehtinen and Lehtinen's Interaction Quality and Gronroos's Perceived
Service Quality (PSQ) models. 

The basis of interaction quality was founded on the premise that service quality is 
formed through the consumer's interaction with the elements of a service organization. 
This model suggested there were three elements of interaction quality: physical quality 
(i.e., the tangible aspects of the service such as the equipment or facility); corporate 
quality (i.e., the image of the service provider); and interactive quality (i.e., the
consumer's interactions with the service provider and other consumers) (Lehtinen and 
Lehtinen, 1982 as cited in Gronroos, 1993; Parasuraman, Zeithaml, and Berry, 1985; and 
Swartz and Brown 1989). 

The PSQ model developed by Gronroos (1988, 1990) used the confirmation- 
disconfirmation paradigm to define total perceived service quality as the gap between 
expected service quality and experienced service quality. Expected service quality is the 
level of quality that the consumer expects to receive. Experienced service quality is made 
up of three basic dimensions: (1) technical quality (i.e., quality of the outcome of service); 
(2) functional quality (i.e., quality of the service process); and (3) perceived image of the
organization (Gronroos 1988, 1990, 1992, 1993). 






Parasuraman, Zeithaml, and Berry (1985, 1988, 1991) built upon the conceptual 
basis formed by interaction quality and PSQ in their development of a gap analysis model 
and the SERVQUAL instrument. Based on extensive focus group interviews, their initial 
work described five potential gaps in the provision of services: (1) consumer expectation - 
management perception gap, (2) management perception - service quality specification 
gap, (3) service quality specifications - service delivery gap, (4) service delivery - external 
communications gap, and (5) expected service - perceived service gap. Two fundamental conclusions were developed from the use of this model. First, perceived service quality is a multidimensional construct; however, interaction with the service provider is the most important variable in the assessment of service quality. Second, there are often significant
perception gaps between the consumers and providers of a service, indicating the service 
providers do not always understand the expectations of consumers (Brown and Swartz, 
1989; Parasuraman et al., 1985; Swartz and Brown, 1989). 

Building upon the gap analysis model, Parasuraman, Zeithaml and Berry (1985, 
1988, 1991) continued to develop and validate an instrument called SERVQUAL. 
SERVQUAL is the most widely known measurement of perceived service quality and its 
development has had considerable impact on the systematic advancement of research 
concerning perceived service quality of consumer services (Gronroos, 1993). Like the 
PSQ model before it, SERVQUAL is based on the disconfirmation of expectations 
paradigm (Gronroos, 1990; Parasuraman et al., 1988). 

The SERVQUAL scale consists of 22 item pairs measuring five dimensions of 
service quality: (1) tangibles, (2) reliability, (3) responsiveness, (4) assurance, and (5)
empathy. Factor analysis and reliability testing on data from four service industries were 
used to develop the final SERVQUAL scale (Parasuraman et al., 1988). 



Measurement of Perceived Service Quality 

Although SERVQUAL is perhaps the most widely used instrument to measure 
service quality, it has received criticism from other researchers who have begun to 
examine the application of SERVQUAL in various settings. There have been three 
general areas of concern regarding SERVQUAL: (1) use of difference scores, (2)
dimensionality of SERVQUAL, and (3) external validity. 

One of the most debated issues that has surfaced concerning the SERVQUAL instrument is the use of difference scores as prescribed by the confirmation-disconfirmation framework. The first problem with the use of these scores is the timing of
the expectation measurement. Expectations may be altered during and after the service 
experience, suggesting that expectations measured during or after service delivery are not 
accurate representations of the expectations of the consumer at the point in time when 
service commenced (Carman, 1990; Gronroos, 1993). 

The second problem concerns the implicit nature of the perception measure. Since 
the perception measure is already a comparison between what the consumer expected and 
what they perceived as the actual service event, the expectation is already implied in the 
perception measure. If expectations and perceptions are both measured then expectations 
are, in effect, measured twice (Gronroos, 1993, Oliver 1993). 

The third problem lies in the questionable reliability of difference scores (Brown et al., 1993; Oliver, 1993; Peter et al., 1993). As Peter et al. (1993) state, "[d]ifference scores (1) are typically less reliable than other measures, (2) may appear to demonstrate discriminant validity when this conclusion is not warranted, (3) may be only spuriously correlated to other measures since they typically do not discriminate from at least one of their components, and (4) may exhibit variance restriction." Therefore, the use of
difference scores may not be reliable, even when the reliability statistics suggest that the 
instrument is reliable. 






Concerns about SERVQUAL's dimensionality have also surfaced in the literature. As mentioned previously, Parasuraman, Zeithaml, and Berry's development of SERVQUAL resulted in a 22-item scale measuring five dimensions. However, several authors have reported results demonstrating that SERVQUAL's five dimensions do not always generalize across service settings. Studies conducted by Babakus and Mangold (1992), Babakus and Boller (1992), Brown et al. (1993), Carman (1990), Cronin and Taylor (1992), and Headley and Miller (1993) all fail in some degree to replicate the original dimensions.

The external validity (i.e., generalizability) of SERVQUAL has been questioned because of evidence that the usefulness of the instrument "as is" may vary depending on the service. It has been shown that the 22 items do not necessarily load on the same factors (Babakus and Boller, 1992; Brown et al., 1993; Carman, 1990; Headley and Miller, 1993; Taylor and Cronin, 1994). In addition, some researchers have suggested that significant wording changes are necessary so that the items are useful in a particular service setting (Babakus and Mangold, 1992; Carman, 1990). Furthermore, it may actually be necessary to modify the length of the scale depending on the setting (Babakus and Boller, 1992; Babakus and Mangold, 1992; Carman, 1990).

Because of these problems, other approaches in measuring service quality have 
been suggested. One of these approaches is to use a perceptions-only scale for the 
measurement of service quality, such as the SERVPERF instrument tested by Cronin and 
Taylor (1992). Perceptions-only scales avoid the concerns about the use and reliability of 
difference scores, without sacrificing scale performance. Perception-only scales also have 
the advantage of being easier to administer, primarily because the subject does not have to 
answer both the expectation and performance question subsets. This advantage greatly 
enhances the practicality of the scale (Babakus and Boller, 1992; Brown et al., 1993; Cronin and Taylor, 1992, 1994; Headley and Miller, 1993; McAlexander, 1994). Zeithaml
et al. (1996) recently recognized the value of perceptions-only scales such as SERVPERF. 






They state, "The perceptions-only operationalization is appropriate if the primary purpose of measuring service quality is to attempt to explain the variance in some dependent construct..." (p. 40).

SERVPERF essentially eliminates the expectation portion of the SERVQUAL 
scale and focuses entirely on service performance. Cronin and Taylor (1992) tested 
SERVPERF (i.e., perceptions-only) versus SERVQUAL (i.e., perceptions-minus- 
expectations). Four service industries were analyzed: banking, pest control, dry cleaning, 
and fast food. The results of the LISREL and oblique factor analysis procedures did not indicate that the dimensionality conformed to the five factors proposed by Parasuraman et al. (1988). However, strong reliability scores were exhibited for all four industries (coefficient alphas greater than 0.800). Based on these results, and the failure of other
studies to exactly replicate the five dimensions, Cronin and Taylor (1992) suggest that 
items in SERVQUAL (and hence SERVPERF) should be considered a uni-dimensional 
measure of service quality rather than a multi-dimensional measure. In addition, 
SERVPERF explained slightly more of the variation in perceived overall service quality, 
satisfaction, and purchase intention than SERVQUAL. 

Significant debate has occurred in the literature regarding Cronin and Taylor's 
(1992) performance only approach to measuring perceived service quality (Cronin and 
Taylor, 1994; Parasuraman et al., 1994). While Parasuraman et al. (1994) concede that performance-only measures such as SERVPERF tend to offer greater predictive power, they argue that such measures do not have as much diagnostic value as disconfirmation measures such as SERVQUAL. In response, Cronin and Taylor (1994) suggest that the SERVPERF scale
could be used as a summed or averaged service quality score that might be plotted over 
time. Therefore, a performance-only measure, such as SERVPERF, should be used when 
the objective is to obtain an overall measure of service quality that can be used as a 
dependent variable and analyzed over time. 






A modified SERVPERF scale will be used in this research because it eliminates the disadvantages of using the difference score approach. In addition, the elimination of the expectation portion of the scale reduces the number of questions that the respondent is required to answer, which enhances ease of administration and should improve the response rate. Furthermore, one of the goals of the proposed research is to study the relationship between wait times and service quality. Since SERVPERF can be used as a summed interval score, this measure is more useful than SERVQUAL for predictive purposes.

Perceived Service Quality and Behavioral Intention 

Presumably, there are two reasons for measuring perceived service quality: first, to understand and improve upon the shortcomings of service delivery; second, to understand the impact that service quality has on future behavior. As mentioned in the
previous chapter, behavioral intentions, such as return intention and recommendation, are 
significant factors in maintaining an effective drug information service. Authors from the 
health care and other fields have studied the behavioral consequences of service quality. 

Babakus and Mangold (1989, 1992), Boulding et al. (1993), Cronin and Taylor 
(1992), Headley and Miller (1993), Parasuraman et al. (1991), and Zeithaml et al. (1996) 
all used modified versions of the SERVQUAL scale to measure service quality and its 
influence on future intentions in a wide variety of service settings. These studies indicated 
that perceived service quality was related to loyalty, switching intention, complaining, 
compliment and recommendation intention, and return intention. Dubé-Rioux et al. (1989) and Bitner (1990) used alternative instruments to look at the issue of service quality and future intent. These two studies used role-playing methodologies involving restaurant and air travel delays, respectively. The results of these studies were consistent with those using the modified SERVQUAL instruments, where service quality was related
to intended future behavior. 

Summary of the Literature 

Queuing theory describes service capacity as a function of the arrival rate (λ), the service rate (μ), and the number of servers (s). There are typically six options available for
reducing waiting times and delays in a system: (1) add servers, (2) improve the service 
rate, (3) increase queue size, (4) change the arrival rate, (5) reduce the variance in service 
times or interarrival times, and (6) change the queue discipline. There are at least two 
methods of building the relationship between waiting time and perceived service quality 
into a simulation model: (1) operationalizing service quality as a measurable variable, and
(2) estimating the relationship as a probability distribution. 
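
For the simplest case covered by queuing theory, an M/M/s system with Poisson arrivals and exponential service, these quantities can be computed directly. The Python sketch below uses the standard Erlang C formula; the arrival and service figures in the example are purely hypothetical and are not meant to describe the drug information service, whose process required simulation rather than closed-form analysis.

    from math import factorial

    def mms_metrics(lam, mu, s):
        """Steady-state metrics for an M/M/s queue with arrival rate lam (lambda),
        service rate mu per server, and s servers.  Requires lam < s * mu."""
        a = lam / mu                # offered load
        rho = a / s                 # server utilization
        if rho >= 1:
            raise ValueError("Unstable system: lambda must be less than s * mu")
        # Erlang C: probability that an arriving question must wait in the queue
        num = (a ** s) / factorial(s) / (1 - rho)
        den = sum((a ** k) / factorial(k) for k in range(s)) + num
        p_wait = num / den
        wq = p_wait / (s * mu - lam)    # expected time in queue
        w = wq + 1 / mu                 # expected time in system
        return {"utilization": rho, "P(wait)": p_wait, "Wq": wq, "W": w,
                "Lq": lam * wq, "L": lam * w}

    # Hypothetical figures: about 1.3 questions arriving per hour, each needing
    # roughly one hour of server time, with three students (servers) on duty.
    print(mms_metrics(lam=1.3, mu=1.0, s=3))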

Perceived service quality is a concept that is still evolving. Currently, 
SERVQUAL is perhaps the most widely used instrument to measure perceived service 
quality. The SERVQUAL scale consists of 22 item pairs measuring five dimensions of 
service quality: (1) tangibles, (2) reliability, (3) responsiveness, (4) assurance, and (5) 
empathy. Recently, however, it has received criticism from other researchers involving 
three general areas of concern: (1) SERVQUAL's use of difference scores, (2) the 
dimensionality of SERVQUAL, and (3) SERVQUAL's external validity. 

SERVPERF is a scale that avoids many of the concerns listed above by 
ascertaining only consumer perceptions (as opposed to expectations and perceptions) 
regarding the five dimensions of service quality listed above. Arguments have also been 
made that SERVPERF exhibits stronger reliability and validity than SERVQUAL. 
SERVPERF also has the advantage of being easier to administer, thus enhancing the 
practicality of the scale. 






The literature also suggests that perceived service quality is related to future
consumer behavior, such as return intention and intent to recommend. In addition, there is 
evidence to hypothesize that evaluations of service quality are affected by waiting time and 
delays in service. Furthermore, it appears that management tends to overestimate 
acceptable waits and customers tend to overestimate actual waits. 

Chapter three will use the information presented in this literature review to 
construct a framework for the variables to be studied. Hypotheses and specific research 
questions will then be developed based on this framework. Chapter four will present the 
methods used to test these hypotheses and specific research questions. 



CHAPTER 3 
RESEARCH FRAMEWORK AND HYPOTHESES 

Research Framework 

The previous two chapters have described the potential relationships among 
service capacity, wait times, perceived service quality, and behavioral intention. Possible 
changes for improving the performance of service systems, from a queuing theory standpoint, were also suggested. This chapter builds on the concepts presented in the
introduction and the literature review by presenting a research framework for the variables 
used in this study. 

There are four primary relationships necessary to understanding the research 
framework for this study. First, service times can be described using a queuing paradigm as a function of the arrival rate, the service rate, the number of servers, and the priority discipline. Important output variables of the system would include the expected service times and queue waits, the percentage of service delays, information regarding the number in the system and queue lengths, and the utilization rates of the servers. Second, waiting
times and service delays are related to consumer perceptions of service quality, and these 
perceptions are related to future behavioral intention. However, consumers may not 
accurately estimate actual waiting times; therefore, perceived waiting time may be a more 
important variable. Third, by creating and manipulating a valid simulation of the service 
system (i.e., a computer model of the actual system), we can propose changes in the 
system that will decrease the amount of time a consumer waits for service to be 
completed. Fourth, service capacity might be optimized by defining a mathematical 
relationship between perceived service quality and service times or service delays. 






Definitions of the concepts used in the theoretical framework for this study (Figure 
3-1) are discussed below. Where applicable, the first definition refers to the concept as it
applies to the empirical measurement and the second definition refers to the concept as it 
applies to the simulation. 

Arrival Rate (λ): (1) The empirically observed interarrival distribution of consumer
information requests and questions in the Drug Information Service (DIS). (2) The 
probability distribution input into the simulation model to predict the interarrival 
distribution of consumer information requests and questions into the DIS. 

Actual Service Time: The total time required to research an answer to the question, obtain an approval, and return the answer to the caller, as empirically observed for the service process in the DIS. Service time is also referred to as waiting time or response
time. 

Behavioral Intention: A subject's assessment of his or her future intentions regarding
the service. More specifically, it relates to whether or not the subject intends to use the 
service again or recommend the service to a colleague. 

Expected Number in System (L): A simulation output variable that indicates the average 
number of uncompleted information requests and questions in the DIS. 

Expected Queue Length (Lq): A simulation output variable that indicates the average 
number of information requests and questions in the DIS that have not yet started the
research process. 






Expected Time in Queue (Wq): A simulation output variable that indicates the average 
amount of time that questions must wait in the queue before starting the research process. 

Expected Time in System (W): A simulation output variable that indicates the average 
total amount of time that a question or information request spends in the system. 



Expected Utilization Rate (ρ): A simulation output variable indicating the average percentage of time that servers were busy.



Overall Service Quality (OSQ): A subject's overall perception of the service quality of 
the drug information service. 

Perceived Service Quality (PSQ): A subject's evaluation of the service quality of the 
drug information service based on the items in the SERVPERF instrument. 

Perceived Service Time: A subject's perceptions regarding the response time of the drug 
information service. More specifically, it relates to perceptions regarding (1) the 
acceptability of the response time, (2) the usefulness of the answer once the response was 
received, (3) the subject's desire for quicker responses from the DIS, and (4) whether the 
response time was shorter, equal, or longer than expected. 

Queue Discipline: (1) The method currently used in the drug information service to
prioritize consumers for service, as described by the service providers during personal 
interviews. (2) A simulation input used to describe the way in which servers decide the 
order in which information requests and questions are handled by the drug information 
service. 






Service Delay: A state indicating whether the actual service time was longer than the response time needed by the caller.

Service Rate (μ): (1) The empirically observed service time distributions of consumer
information requests and questions in the Drug Information Service (DIS). (2) The 
probability distribution input into the simulation model to predict the service times for the 
steps in the service process in the DIS. 

Staffing Level (s): (1) The observed number of individuals available to serve consumers 
and their roles in handling consumer information requests and questions. (2) A simulation 
input describing the number of individuals available to handle service requests during the 
various steps of the service process. 

Research Hypothesis and Specific Research Questions 

The literature has proposed that significant positive relationships should exist

among measures of PSQ, OSQ, and intended future behavior (Boulding et al., 1993; 

Cronin and Taylor, 1992; Parasuraman et al., 1988). Hypothesis one (HI) assesses the 

relationship between PSQ and OSQ. Hypotheses two (H2) and three (H3) are aimed at 

ascertaining the strength of the relationship between PSQ and behavioral intention (i.e., 

intent to call again and intent to recommend service), and between OSQ and behavioral 

intention. 

HI: There is a positive relationship between evaluations of perceived service quality 
(PSQ) and evaluations of overall service quality (OSQ). 

H2a: Intention to use the service in the future is positively associated with evaluations 
of perceived service quality (PSQ). 






H2b: Intention to recommend the service to a colleague is positively associated with 
evaluations of perceived service quality (PSQ). 

H3a: Intention to use the service in the future is positively associated with evaluations 
of overall service quality (OSQ).

H3b: Intention to recommend the service to a colleague is positively associated with 
evaluations of overall service quality (OSQ). 



Previous research has indicated the powerful role of customer perceptions on 
evaluations of perceived service quality, including perceptions regarding service time. It 
has been shown that consumers often cannot accurately ascertain the amount of time 
within which the service was completed (Katz, Larson, and Larson, 1991). If consumers 
cannot accurately evaluate the actual service time, then perceived time may be a more 
important predictor of perceived service quality than actual time. For instance, Tom and 
Lucey (1995) found that customers were more satisfied with the service when the wait 
was shorter than expected than when the wait was longer than expected. Hypothesis four 
(H4) is aimed at gaining more information regarding how callers' perceptions regarding 
the response time of the service are related to their attitudes regarding PSQ. 



H4a: Acceptability of the response time of the service is positively associated with 
evaluations of perceived service quality (PSQ). 

H4b: Perceived usefulness of the information once the response was received is
positively associated with evaluations of perceived service quality (PSQ). 

H4c: Perceived quickness of response is positively associated with evaluations of 
perceived service quality (PSQ). 

H4d: Deviations from expected response times are positively associated with 
evaluations of perceived service quality (PSQ). 




As reported in chapter 2, Bolton and Drew (1994), Clemmer and Schneider (1989a), Davis (1991), Hui and Tse (1996), and Katz, Larson, and Larson (1991) have all reported evidence supporting an inverse relationship between evaluations of the service (i.e., perceived service quality or satisfaction) and waiting time.
Additionally, it has been suggested that the relationship between waiting times and 
perceived service quality may be non-linear (Davis, 1991; Larson, 1987). Hypothesis five 
(H5) evaluates the relationship between service times and evaluations of perceived service 
quality for generalizability to the drug information setting. 



H5a: There is a significant inverse relationship between evaluations of perceived 
service quality (PSQ) and actual service time. 

H5b: There is a non-linear relationship between actual service time and perceived 
service quality (PSQ). 



The literature often does not make a distinction between waiting times and delays in service; however, both have been shown to affect perceived service quality. A delay occurs whenever the actual response time is longer than the promised time. Taylor
(1994a) and Taylor and Claxton (1994) reported that subjects who experienced boarding 
delays in an airport setting had lower evaluations of overall service than non-delayed 
subjects. However, this effect may be moderated by the degree of filled time and 
consumers' perceptions regarding how much control the service provider had over the 
delay. Similarly, Dube'-Rioux (1989), using role-playing scenarios for a restaurant setting, 
also found that delays could affect customer satisfaction; however, the research suggested 
that perceived need and timing of the delays were also important. Based on this research, hypothesis six (H6) assesses the association between delays in service and evaluations of
PSQ. 



H6: Delays in service are negatively related to evaluations of perceived service 
quality (PSQ). 



Additionally, there is little information in the service quality or satisfaction 
literature concerning the relationship between actual and perceived service time; therefore, 
it is unclear to what degree they are actually associated. However, several authors do suggest that perceived service time may be more important than actual service time in relation to perceived service quality (Hornik, 1982, 1984; Katz, Larson, and Larson, 1991;
Taylor, 1994a). Hypothesis seven (H7) explores the associations among the variables 
measuring perceived service time, actual service time, and service delays. 



H7a: There is a positive relationship between actual service time and perceived service 
time. 

H7b: There is a positive relationship between service delays and perceived service time. 



The previous chapter discussed ways in which the performance of a system could 
be improved. It is believed that by changing service capacity in ways that decrease the 
overall service times and the number of service delays, consumer evaluations of perceived 
service quality may be improved. Using a queuing theory framework to optimize service 
capacity suggests that we should examine the effects of changes in staffing levels (s) and service rates (μ). Specific research questions one through three (R1, R2, and R3) assess
the relative impact and sensitivity of these methods in terms of improvements in service 
times and service delays. 






R1: How do changes in staffing levels and service rates impact simulated service times in the drug information service?

R2: What combination of changes in staffing levels and service rates optimizes the system for delays in service when compared against service quality and cost in the drug information service?

R3: How sensitive is this solution to random variation in the system variables (e.g., arrival rate)?






INPUTS: 1. Acceptability of Service Time, 2. Response Still Useful, 3. Quicker Response, 4. Expected Service Time.

NOTES:

1. Arrows relating to study hypotheses and specific research questions are labelled with the number of the hypothesis (H) or research question (R).

2. Arrow direction does not necessarily imply causality.

Figure 3-1. Hypothesized Framework



CHAPTER 4 
METHODS 

Overview 

This chapter describes the methods used to test the research hypotheses and 
explore the specific research questions presented in the previous chapter. It includes a 
description of the study location, the sources of data, the sample selection procedures 
used in the study, and methods of data collection. In addition, it describes the techniques 
used to develop and validate the service quality questionnaire and simulation program. It 
concludes with a description of the data analysis procedures used to test the hypotheses 
and research questions. 

Study Location 

This study was conducted at the Drug Information and Pharmacy Resource Center 
(DIPRC) at Shands at the University of Florida in Gainesville, Florida. The DIPRC 
accepts drug information questions from practitioners from all over the North Florida 
region. The DIPRC accepts calls only from practitioners (e.g. pharmacists, physicians, 
nurses, law enforcement, etc.). Calls from the public are redirected to other resources. 
The DIPRC categorizes callers into three categories: (1) subscribers, (2) non-subscribers, 
and (3) University of Florida Health System employees. The service is provided free of
charge to all callers; however, subscribers pay a voluntary membership fee to help support 
the DIPRC. 

Questions are presented to the DIPRC from various sources, including telephone, 
facsimile (fax), electronic mail, and through in-person visits to the center. However, the vast majority of calls are presented to the center via telephone. The DIPRC classifies
questions into 14 general categories: (1) drug availability, (2) drug dosage and 
administration, (3) drug identification, (4) drug interactions, (5) drug therapy and efficacy, 
(6) drug use in pregnancy or lactation, (7) investigational drugs, (8) IV compatibility or 
stability, (9) legal, (10) other, (11) pharmacokinetics, (12) side effects or adverse effects, 
(13) toxicology, and (14) veterinary drugs. 

Besides the nature of the question, subscription status, and profession, callers are 
asked to provide various demographic information such as name, address, phone number 
and/or fax number. Callers are also asked for the amount of time that they can allow the center to research the question. The DIPRC categorizes these times into four categories: (1) within 15 minutes (stat), (2) within the day (today), (3) by a specific date (date), and (4) no rush. Callers may request an oral response to their questions, a written response, or both.

The DIPRC is usually staffed by three Pharm.D. students during their clerkship 
rotations. Sometimes internship students, visiting international students, and hospital 
pharmacy residents will assist the DIPRC in answering questions. However, the DIPRC 
does not usually know when additional students or residents will become available. In 
addition, the level of contribution that they provide is often limited and unpredictable.

A drug information resident is also usually present in the DIPRC; however, this 
resident has varying duties and does not usually spend their time directly answering 
questions. Two co-directors manage the service's operation and approve student 
responses to information requests. The drug information resident can also approve 
student responses once he or she is qualified and experienced, as determined by the co- 
directors. 




Data Sources, Sample Selection, and Data Collection Procedures

The data for the empirical parts of this study were obtained by non-experimental 
methods. The parts of this study involving computer simulation were experimental in 
design. Data for this study were collected from six sources. The first two were historical data sources (i.e., historical data sheets and a database documenting past monthly workload). The third, fourth, and fifth data sources were collected concurrently
during the data collection period from June, 1997 through July, 1997 (i.e., specially 
designed data collection forms, personal interviews, and service quality questionnaires). 
The last source of data was obtained from computer runs of the simulation program. 

Historical Data Sheets 

The historical data sheets are the standard forms that the DIPRC uses to document 
responses to caller questions and information requests (Appendix H). Every question 
answered by the DIPRC is recorded onto one of these data sheets. In addition, all 
information regarding a particular call is written on or attached to these data sheets. All of 
the archived data sheets were retrieved for September, 1996 through May, 1997 
(approximately nine months) resulting in a total historical data sheet sample size of 2,385. 

Information taken from the historical data sheets included the file number and the 
requestor's name, profession, and subscription status. In addition, the question type, when 
the response was needed, the response type requested, the date and time received, and the 
date and time completed were recorded. From the data sheets, it was also possible to 
determine whether or not the service was delayed past the time needed, and whether or 
not the same person who answered the call also completed the answer. All data were entered into a Microsoft Excel database.




Historical Database 

A historical database maintained by one of the co-directors of the DIPRC 
documenting the monthly workload since 1987 was used to analyze monthly arrivals for 
seasonal trends. The data consisted of a total of 126 data points (i.e., 11 each for January 
through June and 10 each for July through December). This database contained the total 
number of questions answered each month as well as the average daily number of 
questions answered for the month. The average daily number of questions was produced 
by dividing the total number of questions answered during the month by the number of 
days the DIPRC was open to take calls. This data was made available to the principal 
investigator in spreadsheet format. 

Data Collection Forms

A special form was designed to facilitate the specific data collection needs of this 
study. This form was very similar to the historical data sheets described above; however, modifications were made to incorporate space for additional information, such as the recording of dates and times for specific service activities (Appendix I). These data collection forms were collected several times each week from June, 1997 through July, 1997, resulting in a total sample of 526 forms. By comparing the file numbers for the data collection forms obtained against a separate entry log, it was determined that 16 data
collection forms were missing and could not be located. Therefore, over 97% of the data 
collection forms filled out during this period were located and entered. 

Information taken from the data collection forms included: 

1. The file number.

2. The requestor's name and contact information, the requestor's profession, and the requestor's subscription status.

3. The question type. 






4. The time the response was needed and the response type requested. 

5. The date and times for each of the four work activities (i.e., take call, research 
answer, approve answer, and return answer to caller). 

6. Whether or not the service was delayed past the time needed. 

7. The persons approving and completing the answer. 

8. The number of persons working on the question. 

In addition to the information available from analyzing the historical data sheets, the activity-based service times on the data collection forms allowed for a more detailed analysis of the service processes of the DIPRC. This provided information necessary for the development of a more robust simulation. All data were entered into a special Microsoft Access database specifically designed for this project.

Personal Interviews 

Personal interviews with DIPRC co-directors and externship students were 
conducted in order to obtain a more thorough understanding of the various aspects of the 
center's operations, including the process for answering questions and the priority 
discipline used by the students to organize work. Twelve personal interviews were 
conducted from May, 1997 to July, 1997. The interviewees consisted of ten Pharm.D. 
students and the two co-directors for the DIPRC. The students were all interviewed 
during their third rotation week. The interviews were semi-structured, using interview 
outlines to maintain consistency (Appendices J and K). The interviews were audio 
recorded for accuracy of recall. The interview outlines were developed based on 
suggestions made by Stewart and Cash (1988). 



Service Quality Questionnaires 

The fourth data source used was a service quality questionnaire sent to 
practitioners using the service from June, 1997 through July, 1997. This questionnaire 
was administered to callers after service completion (Appendix M). This questionnaire 
assessed perceived service quality (PSQ) using a modified SERVPERF scale, perceived service time, perceived overall service quality (OSQ), and behavioral intention. The inclusion criterion for receiving a questionnaire was any practitioner who submitted a drug information question to the DIPRC during the study period. Callers were excluded from this portion of the study if they had already been sent a questionnaire (i.e., callers were not surveyed more than once). Out of the available 526 samples, 332 questionnaires were sent out to practitioners, 183 samples were repeat callers who had already been sent a questionnaire, and contact information was not available for 11 of the callers.

A data collection procedure based on the "Total Design Method" developed by 
Dillman (1978, 1994) was used to maximize the response rate of the questionnaire. There 
were three phases to this procedure. First, eligible subjects identified from the data 
collection forms were sent a questionnaire (Appendix M) along with a cover letter from 
the co-director and the principal investigator explaining the purpose of the research and 
asking for the subjects' participation (Appendix L). A self addressed, stamped envelope 
was also enclosed for the subject to return the survey. Second, approximately one week 
after the questionnaires were mailed, a reminder post card (Appendix N) was sent to non- 
responders asking subjects to fill-out and send in the questionnaire or to contact the 
principal investigator if they did not receive a questionnaire. Subjects who replied to the 
post card saying that they never received a questionnaire were promptly sent another. 
Third, approximately two weeks after the first questionnaire was sent, an attempt was 
made to contact each non-responding subject by telephone. If the subject was reached, 
the interviewer attempted to ascertain the reason for the non-response, and, if still applicable, the subject was reminded to send in the questionnaire. If the subject was not
reached, an attempt was made to leave a message or reminder through a receptionist or 
co-worker. 

All of the pre-test and main questionnaires were developed using the Survey Pro 
for Windows software program. This program also provides a facility for data entry and 
export which was used to record the responses to the questionnaires for future analysis. 
Responses to the final question (i.e., Q35) asking for additional comments were recorded 
into a word processing document along with all other comments written next to the 
individual items. 

Simulation Runs 

The sixth data source used was the output generated by the simulation program. 
The simulation model was constructed using the GPSS/H (General Purpose Simulation 
System) simulation language produced by Wolverine Software for use on an MS-DOS 
based personal computer. Results from the simulation runs were entered into a database 
for the purposes of summarization and statistical analysis. Five separate groups of 
simulation runs were made. The first three runs were used to verify and validate the 
simulation program. The fourth run was used to test specific research questions one and 
two. The final run was used to determine the sensitivity of the optimal solution as 
described by the third specific research question. 
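
The simulation itself was written in GPSS/H and is not reproduced here. As an illustration only of the process being modeled (take the call, research the answer, approve it, and return it), the sketch below shows a rough analogue in Python using the SimPy library; the exponential distributions, the single approver, and the run length are simplifying assumptions, and the mean activity times are simply those reported later in Table 4-1.

    import random
    import simpy

    # Mean activity times (minutes) taken from Table 4-1; the exponential
    # distributions, the 3-student / 1-approver staffing, and the run length
    # are simplifying assumptions made only for this sketch.
    MEAN_INTERARRIVAL, MEAN_RECEPTION = 45.0, 4.1
    MEAN_RESEARCH, MEAN_APPROVAL, MEAN_RETURN = 42.5, 1.9, 5.9
    SIM_MINUTES = 60 * 8 * 20   # roughly one month of working hours (assumption)

    def question(env, students, approver, times_in_system):
        arrived = env.now
        # Taking the call and researching the answer each require a student.
        for mean in (MEAN_RECEPTION, MEAN_RESEARCH):
            with students.request() as req:
                yield req
                yield env.timeout(random.expovariate(1 / mean))
        with approver.request() as req:              # approval step
            yield req
            yield env.timeout(random.expovariate(1 / MEAN_APPROVAL))
        with students.request() as req:              # return the answer to the caller
            yield req
            yield env.timeout(random.expovariate(1 / MEAN_RETURN))
        times_in_system.append(env.now - arrived)    # time in system, W

    def arrivals(env, students, approver, times_in_system):
        while True:
            yield env.timeout(random.expovariate(1 / MEAN_INTERARRIVAL))
            env.process(question(env, students, approver, times_in_system))

    random.seed(42)
    env = simpy.Environment()
    students = simpy.Resource(env, capacity=3)       # staffing level s = 3
    approver = simpy.Resource(env, capacity=1)
    times_in_system = []
    env.process(arrivals(env, students, approver, times_in_system))
    env.run(until=SIM_MINUTES)
    print("Average simulated W (minutes):",
          round(sum(times_in_system) / len(times_in_system), 1))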

Procedures for Protecting Privacy and Confidentiality 

Each record of information gathered from the historical data sheets or concurrent 
data collection forms was coded with an identification number. For tracking purposes 
only, surveys sent out using information collected from the data collection forms were also 
coded with an identification number. Once all mail questionnaires were returned, all post card and telephone follow-ups completed, and all data entered and verified, the portion of
the database containing the callers' contact information was deleted. Furthermore, written 
comments transcribed from the questionnaires were edited to exclude any personal 
references. All historical data sheets were returned to the DIPRC once data entry was 
completed. Hard copies kept of the data collection forms were coded with the 
identification number and personal information contained on these hard copies was 
masked using permanent marker. In addition, any comments or quotes used from the 
personal interviews were edited so that they could not be traced to the speaker. This 
project was reviewed by the Health Center Institutional Review Board at the University of 
Florida and approved on July 29, 1997. 

Sample Size Calculations 

Required Number of Data Sheets 

In order to estimate the sample sizes necessary to provide accurate estimates of the 
arrival and service time distributions required to construct the simulation of the DIPRC, 
the data sheets from approximately the first week of data collection were compiled and 
analyzed. The necessary sample sizes were calculated using the method proposed by 
Mendenhall, Wackerly, and Scheaflfer (1990) for establishing a large sample 95% 
confidence interval for a given standard deviation and error of estimation. The sample size 
calculations are summarized in Table 4-1, 

The largest of these estimates is the 335 samples required for a 95% confidence 
interval using a standard deviation of 45.6 minutes and an error of estimation of 5 minutes. 
Since the DIPRC receives between 250 and 300 calls per month, two months of data were
collected for this study in order to satisfy this sample size requirement. This time period 
resulted in an actual sample of 526 data sheets, which provided a conservatively large sample from which to estimate the required parameters and allow considerations for
missing data. 
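
For reference, the textbook large-sample calculation behind this kind of estimate is n = (z * sigma / E)^2, sketched below in Python. It approximates, but does not exactly reproduce, the figures in Table 4-1, which were computed with the method described by Mendenhall, Wackerly, and Scheaffer (1990).

    from math import ceil

    def required_n(sigma, error, z=1.96):
        """Large-sample size needed so that a 95% confidence interval for the mean
        has half-width no larger than `error`:  n = (z * sigma / error) ** 2."""
        return ceil((z * sigma / error) ** 2)

    # Service-time estimates from the first week of data collection:
    # standard deviation 45.6 minutes, error of estimation 5 minutes.
    print(required_n(sigma=45.6, error=5))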



Table 4-1. Required Sample Sizes for Selected System Parameters

Parameter            Mean        St. Dev.    Error of      Required Sample
                                             Estimation    Size (α = 0.05)
Interarrivals        45 min.     43 min.     5 min.        296
Reception of Call    4.1 min.    3.7 min.    1 min.        55
Service Time         42.5 min.   45.6 min.   5 min.        335*
Approval             1.9 min.    3.7 min.    1 min.        54
Return Answer        5.9 min.    8.8 min.    1 min.        310

* Largest sample size required.



Required Number of Questionnaires 

The primary statistical methods used in testing relationships involving the items in the service quality instrument were correlation (H1 through H4, H6, and H7) and linear regression (H5). As such, there were two considerations driving the sample size determination for the service quality questionnaires. Primary consideration was given to the hypotheses to be tested. Secondary consideration was given to the required number of data points necessary to conduct principal components factor analysis on the SERVPERF portion of the survey. As recommended by Sawyer and Ball (1981), the type I error rate for sample size calculations was set at α = 0.05, and the type II error rate was set at β = 0.20 (where power equals 0.80). The rationale for these error rates is based on the assumption that, for this study, committing a type I error was more critical than committing a type II error. Therefore, α and β were selected so that only a small chance existed that the null hypothesis would be rejected when no true differences exist and a reasonably high probability existed of rejecting the null hypothesis when differences do exist.

First, HI through H4 and H6 through H7 used correlation as the primary statistic 
to detect significant associations among the study variables. The software program "PC- 
SIZE" (Dallal, 1986) was used to estimate the required sample sizes for the correlation. 






All algorithms used in the program are based on published statistical literature (see Dallal 
(1986) for references). The results produced from the program itself were verified by 
Dallal (1986) against a selection of entries from tables presented in Cohen (1977), Fleiss 
(1981), and Odeh and Fox (1975). The results from the pre-test and reports of similar 
correlations in the literature suggested that correlations as small as 0.25 may be 
significant. In order to detect a statistically significant correlation of at least 0.25 with a power of 0.80 and an α level of 0.05, a sample size of at least 123 surveys is necessary.
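
A common approximation for this calculation, based on Fisher's z transformation, is sketched below in Python; it yields a figure within about one observation of the 123 reported here, and the algorithms used by PC-SIZE itself (see Dallal, 1986) may differ slightly in rounding.

    from math import ceil, log

    def n_for_correlation(r, alpha_z=1.96, power_z=0.8416):
        """Approximate sample size to detect a correlation of size r with
        two-sided alpha = 0.05 and power = 0.80, using the Fisher z transform."""
        fisher_z = 0.5 * log((1 + r) / (1 - r))
        return ceil(((alpha_z + power_z) / fisher_z) ** 2) + 3

    print(n_for_correlation(0.25))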

Second, rules-of-thumb for conducting factor analysis generally suggest sample 
sizes of at least 100 or between 5 and 10 samples per variable to be analyzed, whichever is 
greater. The actual number of samples needed depends on the amount of variability
explained by the factors, the strength of the factor loadings, and the communalities of the 
individual variables (Crocker and Algina 1986; Stevens 1996). The factor analysis for the 
pre-test was conducted with an average of 4.65 samples per variable. Six rotated factors 
explained 71 percent of the variance, and nearly all of the items loaded strongly on one of 
the six factors. Furthermore, all but two of the items had communalities greater than 0.6. 
This evidence suggests that five samples per variable is adequate to factor analyze the 
SERVPERF portion of the questionnaire. Since there are 20 variables to be analyzed, 20 times 5 yields a minimum sample size of 100.

Third, H5 used regression analysis to examine the relationship between service time (in minutes) and PSQ. Determining the number of samples required for regression analysis is complex, since statistical power for this method is a function of both the number of predictors used in explaining the variance in the dependent variable and the effect size (as measured by R²) that the researcher wants to detect (Green 1991). However, S.B. Green (1991) has developed a two-step methodology for estimating sample sizes for regression purposes that compares favorably with Cohen's (1988) more complicated procedures for multiple regression power analysis. For a statistical power of at least 0.80, the minimum sample size necessary is L divided by f (i.e., N = L/f), where L = 6.4 + 1.65m - 0.05m² and f = R²/(1 - R²), and m is the number of predictors to be used in the model (Green 1991, p. 504). Detecting a significant relationship between service time and PSQ with an R² of at least 0.10 (i.e., a small effect) using simple linear regression (i.e., one predictor) results in L = 8 and f = 0.11. Using these numbers in the equation for N presented above results in a required minimum sample size of 72.
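
Green's rule of thumb is simple enough to code directly; the short Python sketch below reproduces the calculation for one predictor and a small effect size.

    from math import ceil

    def green_sample_size(m, r_squared):
        """S.B. Green's (1991) rule of thumb for regression sample sizes:
        N = L / f,  where  L = 6.4 + 1.65*m - 0.05*m**2  and  f = R^2 / (1 - R^2)."""
        L = 6.4 + 1.65 * m - 0.05 * m ** 2
        f = r_squared / (1 - r_squared)
        return ceil(L / f)

    # One predictor (service time) and a small effect size of R^2 = 0.10:
    print(green_sample_size(m=1, r_squared=0.10))   # -> 72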

Therefore, the minimum number of questionnaires needed to conduct the 
hypothesis tests with sufficient power was 123. The pre-test achieved a response rate of 
approximately 67% for a sample of 201. Based on the pre-test response rate and
discussions with the DIPRC co-directors, a minimum response rate of 50% for the main 
questionnaire was reasonably anticipated. Thus, a sample of at least 246 callers was 
calculated as the necessary sample size for the questionnaire portion of this project. 
During two months of data collection a sample of 332 callers was identified and sent 
questionnaires, which was considered sufficient for the purposes of this study. 

Study Variables 

The previous chapter introduced definitions for the variables used in this study as 
presented in the hypothesized research framework. This section discusses how these 
variables were measured using the data sources described above. 

Arrival Rate (λ): Interarrival times were estimated by arranging the arrival times from the historical data sheets (Appendix H) and data collection forms (Appendix I) in ascending order by date and time of arrival. The interarrival time was obtained by subtracting the arrival time for the previous record from the time of the current arrival. For example, if the current record has an arrival time of 12:00 p.m. and the previous arrival occurred at 11:30 a.m., then the interarrival time was 30 minutes. Only interarrivals within each day
were estimated. 
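
A minimal sketch of this calculation is shown below, assuming the records have been loaded into a pandas DataFrame (the study itself used Excel and Access databases); the timestamps are invented for illustration.

    import pandas as pd

    # Hypothetical records in the shape of the data-sheet database: one row per
    # question, with the date and time the call was received.
    calls = pd.DataFrame({
        "received": pd.to_datetime([
            "1997-06-02 09:15", "1997-06-02 11:30", "1997-06-02 12:00",
            "1997-06-03 08:45", "1997-06-03 10:05",
        ])
    })

    calls = calls.sort_values("received")
    calls["date"] = calls["received"].dt.date
    # Difference from the previous arrival, computed separately within each day
    # so that overnight gaps are not counted as interarrival times.
    calls["interarrival_min"] = (
        calls.groupby("date")["received"].diff().dt.total_seconds() / 60
    )
    print(calls)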






Actual Service Time: The actual service time refers to the amount of time required to 
respond to a question. This was obtained by subtracting the "End Time" for receiving a 
call from the "Start Time" for returning an answer to a caller. These data were taken from 
entries made on the data collection forms (Appendix I). 

Behavioral Intention: Behavioral intention was measured by two items (items 33 and 34 
in Appendix M). The first item was worded, "I intend to use this service in the future." The second item was worded, "I would recommend this service to a colleague." Each of these perceptions was measured on a 7-point scale with a one representing "Strongly Agree" and a seven representing "Strongly Disagree". Thus, lower scores indicate greater intention.

Expected Number in System (L): A simulation output variable that indicates the average 
number of uncompleted information requests in the system. This value was obtained from 
the queue reports generated by GPSS/H for the queue labeled "TOTALQ", which collects 
queuing and service time information relating to the entire service process. 

Expected Queue Length (Lq): A simulation output variable that indicates the average 
number of questions in the system that have not yet started the research process. This 
value was obtained from the queue reports generated by GPSS/H for the queue labeled 
"BOARDQ", which collects queuing and service time information relating just to the 
period in which questions spend in the queue before starting the research process. 

Expected Time in Queue (Wq): A simulation output variable that indicates the average amount of time that information requests or questions must wait in the queue before
starting the research process. This value was obtained from the queue reports generated 
by GPSS/H for the queue labeled "BOARDQ". 






Expected Time in System (W): A simulation output variable that indicates the average 
total amount of time that a question or information request spends in the system. This 
value was obtained from the queue reports generated by GPSS/H for the queue labeled 
"TOTALQ". 

Expected Utilization Rate (ρ): A simulation output variable indicating the average percentage of time that servers are busy. This was obtained from the facility reports generated by GPSS/H, which reports utilization as the percentage of the total time that the facilities (i.e., students) were captured (i.e., utilized). Utilization rates for each of the simulated students were averaged together to obtain an expected overall utilization rate.

Overall Service Quality (OSQ): A subject's overall evaluation of the service quality of 
the drug information service. This perception was evaluated using a single-item measured 
on a 6-point scale (item 28 in Appendix M). The item was worded "The overall quality of 
the services provided by the DIPRC is best described as." An "Excellent" rating was 
scored as a one and an "Unacceptable" rating was scored as a six. Thus, lower scores 
indicated higher perceived OSQ. 

Perceived Service Quality (PSQ): A subject's evaluation of the service quality of the 
drug information service was measured by summing the individual items of the 
SERVPERF instrument to obtain an overall perceived service quality score. During the 
main phase of the study, 20 items initially composed the PSQ scale (numbers four through 
twenty-three in Appendix M). These items were measured on a seven-point scale anchored 
with "Strongly Agree" (receiving a value of one) and "Strongly Disagree" (receiving a 
value of seven). Negatively worded items were reverse scored. One item was dropped from the scale, resulting in a final measure composed of 19 items. Therefore, PSQ had a possible range of 19 to 133 points, with lower scores indicating higher perceived quality.
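
Scoring can be illustrated with the short Python sketch below. Which item was dropped and which items were negatively worded are assumptions made only for this example; the actual questionnaire items are in Appendix M.

    # Hypothetical responses for the 19 retained SERVPERF items, keyed by item
    # number on the 1-7 scale.  Item 13 is assumed (for illustration only) to be
    # the dropped item, and items 8 and 16 are assumed to be negatively worded.
    responses = {item: 2 for item in range(4, 24) if item != 13}
    NEGATIVELY_WORDED = {8, 16}

    def psq_score(responses):
        """Sum the SERVPERF items, reverse scoring negatively worded items,
        giving a total between 19 and 133 (lower = higher perceived quality)."""
        total = 0
        for item, value in responses.items():
            total += (8 - value) if item in NEGATIVELY_WORDED else value
        return total

    print(psq_score(responses))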






Perceived Service Time: A subject's perceptions regarding the response time of the drug 
information service. This variable was measured by four questionnaire items. The first 
was question 24 "Acceptable Time". This item was worded, "The amount of time that it 
took the DIPRC to respond to my most recent question was acceptable." The second was 
question 25 "No Longer Usefiil". This item was worded, "By the time I received a 
response from the DIPRC, the information was no longer usefiil to me." The third was 
question 26 "Quicker Response". This item was worded, "I wish the DIPRC could 
provide a quicker response to my questions." All three of these items were scored on a 
seven-point scale anchored with "Strongly Agree" (receiving a value of one) and "Strongly 
Disagree" (receiving a value of seven). The fourth item was worded "The amount of time 
that it took the DIPRC to respond to my most recent question was." This item was also 
scaled on a seven-point scale; however, it was anchored by "Much Shorter than Expected" 
(receiving a value of one) and "Much Longer than Expected" (receiving a value of seven). 

Queue Discipline: The queue discipline is the method currently used in the DIPRC to 
prioritize questions as they arrive. This discipline was described by the students and co- 
directors during personal interviews. The queue discipline used by the simulation model 
was developed using these descriptions. 

Service Delay: This is a dichotomous variable measured by comparing the observed 
service time with the response time needed as reported on the data collection forms and 
historical data sheets. If the service time was longer than the time needed then the variable 
was coded with a value of one indicating "Delayed". If the service time was shorter than 
the time needed then the variable was coded with a value of zero indicating "Not delayed". 
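
The coding rule amounts to a single comparison, sketched below in Python with a hypothetical example.

    def delay_flag(service_time_min, time_needed_min):
        """Code the dichotomous Service Delay variable: 1 = delayed, 0 = not delayed."""
        return 1 if service_time_min > time_needed_min else 0

    # e.g., a "stat" request (needed within 15 minutes) answered in 40 minutes:
    print(delay_flag(40, 15))   # -> 1 (delayed)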






Service Rate (μ): The probability distribution that is input into the simulation model to
describe the service times for the steps in the service process in the DIS. These probability 
distributions were estimated from the time information obtained from the historical data 
sheets and concurrent data collection forms discussed above. 

Staffing Level (s): A simulation input describing the number of individuals available to 
handle service requests. Under "Normal Operation" this variable had a value of three, 
indicating that three students were responsible for staffing the DIPRC. This value was 
varied from one to five in the simulation model to assess the impact of changes in staffing 
levels on the results of the simulation model. 

Questionnaire Development and Validation

As described in the literature review, service quality instruments based on the items 
in the SERVQUAL scale are in wide use. Therefore, the consensus among those using 
the scale appears to be, generally, that the items composing the scale are an adequate 
representation of perceived service quality, barring modifications necessary for 
applicability. Furthermore, the procedures used by Parasuraman, Zeithaml, and Berry 
(1985, 1988) to develop the individual items appear to be well-supported (Cronin and 
Taylor, 1992; Oliver 1993). Therefore, the question of whether the items in the 
SERVPERF instrument actually measure the construct of perceived service quality is not 
at issue in this research. However, because the SERVPERF instrument was modified for 
use in the drug information setting, it was necessary to validate the content of the 
questionnaire, explore some general aspects of construct validity, establish internal 
consistency, and report the potential for non-response bias. 




Content Validity 

Content validity is an assessment of whether the items in a scale adequately 
measure the construct of interest (Crocker and Algina, 1986). Three methods were used 
to establish the content validity of the instrument. First, as suggested by Crocker and Algina (1986) and DeVellis (1991), an expert panel was asked to evaluate the service quality instrument used in this study for clarity and readability. In addition, the panel was also asked to assess whether or not the instrument covered all of the domains that they believed were important in measuring the quality of a drug information service. This panel consisted of:

1. Two co-directors of the Drug Information and Pharmacy Resource Center at Shands 
and the University of Florida. 

2. The director and employees of the Arkansas Poison and Drug Information Center at 
the University of Arkansas for Medical Sciences. 

3. Three recent callers to the DIPRC identified from the data sheets (2 pharmacists and 1 
nurse). 

4. Three senior graduate students with pharmacy degrees as well as experience and 
educational background in survey design and methodology. 

5. Two professors experienced in survey research and familiar with the literature 
involving perceived service quality. 

The recent callers and the graduate students were asked to complete the 
instrument and report the length of time spent on the questionnaire. From these six 
completed questionnaires, it was determined that the typical time for completion of the 
questionnaire was between 5 and 10 minutes, depending on the number of written comments. 






Changes were made to the initial instrument based on the recommendations of the 
panel. These recommendations were primarily related to clarity and item order issues; 
however, five items were added to the initial instrument based on the panel's comments: 
(1) a question concerning calling frequency (Item 3), (2) a question related to the need for 
written supporting documents (Item 31), (3) a question involving the service's relation to 
patient outcomes (Item 32), and (4) two questions concerning the usefulness of the 
service (Items 29 and 30). 

Second, the original version of the SERVPERF scale did not include a response 
option allowing subjects to differentiate between items for which they had no opinion or 
no experience versus items for which they actually had neutral feelings. Survey research 
conducted by Kippen, Strasser, and Joshi (1997) reported differences in response patterns 
for subjects that had "No Opinion" and "No Experience" response categories versus 
subjects that were forced to choose a response. The pre-test was conducted using two 
versions of the questionnaire. The first version forced the subjects to choose a response 
from a seven-point scale (Appendix E). The second version included an eighth scale 
option allowing subjects to select "Don't Know" for questions about which they did not 
have an opinion or that they felt were not applicable (Appendix F). The response patterns 
between these two versions were used to screen out items that were not applicable to the 
drug information setting. 

Third, comments were often written next to individual items and in response to the 
final question of the questionnaire (Appendices G and P). These comments were used in 
combination with the other validation techniques to help decide if individual items were 
applicable to the drug information setting. 



Construct Validity 

Construct validity is concerned with the theoretical relationships between variables 
to the extent that the measures used to represent variables behave as expected in relation 
to other measures (Crocker and Algina, 1986). Two issues relevant to the construct 
validity of the questionnaire were examined. 

First, the dimensionality of the SERVPERF portion of the questionnaire was 
explored. Parasuraman et al. (1985, 1988) have described perceived service quality as a 
multidimensional construct covering five separate dimensions: tangibles, reliability, 
responsiveness, empathy, and assurance. However, as described in the literature review, 
other researchers have had trouble duplicating the original five dimensions. Furthermore, 
researchers have found that items do not always load on the same dimension. Cronin and 
Taylor (1992) suggested that perceived service quality should be considered a 
uni-dimensional rather than a multi-dimensional construct. If SERVPERF can 
be considered a multi-dimensional construct, then it may be important to examine the 
differential effects of service time and delays on these dimensions as well as on the overall 
instrument. 

This research explored the dimensionality of the SERVPERF portion of the service 
quality questionnaire using principal components factor analysis with a Varimax (i.e., 
orthogonal) rotation. The number of components to retain was decided using the Kaiser 
criterion, such that components with eigenvalues of 1.00 or higher were retained and 
components with eigenvalues below 1.00 were excluded (Stevens, 1996). Items were assigned 
according to their factor loadings. Items with loadings less than 0.40 were rejected 
(DeVellis, 1991; Stevens, 1996). 
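As a sketch of this procedure (using a generic respondents-by-items data matrix; the random data below are placeholders for the actual questionnaire responses, and the study's own analysis was run in a statistical package rather than with this code), a principal components analysis of the item correlation matrix with the Kaiser criterion and a standard varimax rotation could be implemented as follows.

import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Standard varimax rotation of an (items x components) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    prev_obj = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        target = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        obj = s.sum()
        if obj < prev_obj * (1 + tol):
            break
        prev_obj = obj
    return loadings @ rotation

def pca_kaiser_varimax(X, cutoff=0.40):
    """Principal components of the item correlation matrix, retaining components
    with eigenvalues >= 1.00 (Kaiser criterion) and rotating them with varimax."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals >= 1.00
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])   # unrotated loadings
    rotated = varimax(loadings)
    rotated[np.abs(rotated) < cutoff] = 0.0   # suppress loadings below 0.40 when assigning items
    return eigvals, rotated

# Random data standing in for the 93 pre-test responses to 20 SERVPERF items.
rng = np.random.default_rng(0)
X = rng.normal(size=(93, 20))
eigenvalues, rotated_loadings = pca_kaiser_varimax(X)
print("Components with eigenvalues >= 1:", int(np.sum(eigenvalues >= 1.00)))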

Second, the correlations among PSQ, OSQ, and behavioral intention obtained 
from this questionnaire were compared with similar correlations obtained by Cronin and 
Taylor (1992). In other words, the question was asked, "Do the variables behave as 






expected?" This step is important due to the changes made to SERVPERF for use in the 
drug information setting. Using these comparisons, it was possible to see if the variables 
used in this research performed consistently with what has been reported in the previous 
research. Hypotheses H1, H2, and H3 were used to compare these relationships. 

Reliability Assessment 

Internal consistency measures reliability for a single time period and essentially 
measures item homogeneity and quality (Crocker and Algina, 1986; DeVellis, 1991). 
Internal consistency was measured using coefficient alpha (i.e., Cronbach's alpha). 
Coefficient alpha describes the proportion of the scale score variance attributable to the 
true score, and is affected by many item quality problems (e.g., non-central mean, poor 
variability, negative or weak inter-item correlation, and poor item-scale correlation) 
(DeVellis, 1991). As an absolute measure, an alpha ranging from 0.60 to 0.70 was 
considered acceptable, 0.70 to 0.80 was considered good, and an alpha ranging from 0.80 
to 0.90 was considered very good (DeVellis, 1991). 
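A compact sketch of the coefficient alpha computation is shown below; it uses the standard variance-based formula rather than any particular statistical package, and the toy scores stand in for the actual questionnaire responses.

import numpy as np

def cronbach_alpha(item_scores):
    """Coefficient alpha for a (respondents x items) array, computed from the
    item variances and the variance of the summed scale score."""
    items = np.asarray(item_scores, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Toy data standing in for the four perceived-service-time items (1-7 scale).
scores = np.array([[6, 5, 6, 7],
                   [3, 4, 3, 2],
                   [5, 5, 6, 5],
                   [2, 3, 2, 3],
                   [7, 6, 7, 6]])
print(round(cronbach_alpha(scores), 4))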

Assessment of Non-Response Bias 

The potential for non-response bias was evaluated in two ways. First, the reasons 
that subjects gave for not responding to the survey when asked during the reminder 
telephone call were collated and summarized. Second, one-way ANOVA procedures 
were conducted for subscriber status, profession, and response interval to check for 
significant differences in PSQ and OSQ among the respective groups. These dependent 
variables were chosen because of their importance in the hypothesis tests. 

Each questionnaire response was received within one of three response intervals: 
(1) response was received before the reminder post card was mailed, (2) response was 
received after the reminder post card, but before the follow up phone call, and (3) 






response was received after the follow-up phone call. The rationale for using the 
response interval to assess non-response bias is based on the assumption that late 
responders were more likely to be like non-responders in their answers to the 
questionnaire than subjects who returned the questionnaire without needing a reminder. If 
significant differences existed between the early and late responders, it was likely that 
some degree of non-response bias also existed. 
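As an illustration of this check (with synthetic scores standing in for the actual PSQ ratings), a one-way ANOVA across the three response intervals could look like the following; a non-significant F would be taken as evidence against strong non-response bias.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
psq_before_postcard = rng.normal(5.8, 0.8, size=60)   # responded before the reminder postcard
psq_after_postcard = rng.normal(5.7, 0.8, size=40)    # responded after the postcard
psq_after_phone = rng.normal(5.5, 0.8, size=30)       # responded after the follow-up call

f_value, p_value = stats.f_oneway(psq_before_postcard,
                                  psq_after_postcard,
                                  psq_after_phone)
print(f"F = {f_value:.3f}, p = {p_value:.3f}")   # p >= 0.05 suggests no detectable bias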

Pre-test of Questionnaire 

The pre-test for this study was conducted at the Arkansas Poison and Drug 
Information Center (APDIC) in Little Rock, Arkansas. The APDIC takes drug 
information and poison intervention calls from practitioners and lay consumers from the 
state of Arkansas, and has contractual relationships with other organizations and 
corporations requiring drug information services. The APDIC is staffed primarily by 
registered pharmacists; however, Pharm.D. interns do occasionally provide additional 
assistance. 

Only health care practitioners calling in with drug information questions (i.e., non- 
poison related) were identified as potential participants. During the study period, 244 
callers fit these criteria; however, 41 (16.8%) of these callers were identified as repeat 
callers, and as such were only sent one questionnaire. Contact information was 
unavailable for two of the callers. This resulted in a usable pre-test sample of 201 subjects. 

Questionnaires were mailed to these 201 subjects on June 5, 1997 and June 6, 
1997. A cover letter (Appendix A) from the director of the APDIC was printed on 
letterhead and sent along with one of two versions of the pre-test questionnaire 
(Appendices B and C). One of the versions was randomly assigned to each subject. 
Approximately three weeks after the initial mailing, a postcard was sent to non-responders 
urging them to mail in the questionnaire (Appendix D). This postcard also thanked 






respondents if they had already sent in the questionnaire, and asked those who had not 
received the questionnaire to contact the researcher (see Appendix C). In total, 134 
subjects responded in time to be included in the analysis. Three envelopes were returned 
as undeliverable, two subjects called to say that they had received the postcard but not the 
questionnaire, and one subject returned the survey completely unanswered. Two subjects 
returned completed surveys too late to be included in the analysis. This equaled a raw 
response rate of 66.7% (134 divided by 201) and an adjusted response rate of 69.1% (134 
divided by 194). 

Of these 134 respondents, 119 (88.8%) were pharmacists, 2 (1.5%) were 
physicians, 3 (2.2%) were nurses, and 7 (5.2%) were categorized as "Other". One of the 
surveys sent to a physician was returned as undeliverable, and no response was received 
from two nurses who were mailed a questionnaire. 

For analysis purposes, negatively worded questions (pre-test question numbers 4, 
11, 12, 14, 15, 16, 18, 22, 23, 24, 26, and 27) were reverse scored. Furthermore, the second 
version of the questionnaire included a "Don't Know" checkbox, while the first version 
did not include this as an option. By comparing the patterns of response for the two 
versions, it was possible to gain some insight regarding how subjects responded to items 
that they did not know how to assess or that did not apply to them. In addition, these 
comparisons made it easier to separate items that were not applicable to the setting from 
those items about which respondents actually had neutral feelings. 
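A minimal sketch of the reverse-scoring step on a seven-point scale is shown below; the item names and values are placeholders, not the pre-test data.

import pandas as pd

REVERSED_ITEMS = ["q4", "q11", "q12"]   # subset of the reverse-worded items, for illustration

def reverse_score(responses, columns, scale_max=7):
    """Map 1 <-> 7, 2 <-> 6, ..., so higher always means a more favorable rating."""
    out = responses.copy()
    for col in columns:
        out[col] = (scale_max + 1) - out[col]
    return out

df = pd.DataFrame({"q4": [1, 7, 4], "q11": [2, 6, 5], "q12": [3, 3, 7]})
print(reverse_score(df, REVERSED_ITEMS))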

Of the 100 questionnaires sent out without a "Don't Know" response category 
(i.e., version one), 64 (64.0%) were returned. Of the 101 questionnaires sent out 
including a "Don't Know" response category (i.e., version two), 70 (69.3%) were 
returned. The response patterns for the two versions indicated that items 13 ("The 
APDIC keeps its records accurately") and 21 ("Employees get adequate support from the 
APDIC to do their jobs well") were difficult to answer. Table 4-2 below shows the 
responses for questions 13 and 21 by version. Question 13 on version 2 of the 
questionnaire drew 34 (34.2%) "Don't Know" responses, and question 21 drew 20 
(28.6%) "Don't Know" responses. When comparing versions 1 and 2 for both of these 
items, a clear shift is evident from the "Neutral" and no response categories to the "Don't 
Know" category. This suggests that when subjects were not able to select "Don't Know" as 
a response, they often gave a neutral response instead. These data also 
suggest that these items were not applicable to the drug information setting. Appendices 
E and F illustrate the responses for all items. 

This lack of applicability is further supported by the comments written next to the 
items (Appendix G). For question 13, six of the respondents wrote in "don't know" next 
to the question and another wrote in that they did not understand the statement. Similarly, 
for question 21, six respondents indicated that they did not know exactly how to answer 
the question, one respondent stated that the question did not apply, and another stated that 
they did not understand the question. As one respondent wrote, "[t]he only hint I have is 
how well the employees answer my questions," suggesting that subjects found these items 
difficult to evaluate because they had no direct experience with these issues. Based on 
these data, it was decided to exclude items 13 and 21 from the rest of the analysis and to 
eliminate them from the final version of the questionnaire. 



Table 4-2. Responses to Pre-test Questions 13 and 21 

Response             Question 13    Question 13    Question 21    Question 21 
Category             Version 1      Version 2      Version 1      Version 2 
                     Freq. (%)      Freq. (%)      Freq. (%)      Freq. (%) 
Strongly Agree       11 (16.9)       9 (12.9)      16 (25.0)      16 (22.9) 
Agree                16 (24.6)      14 (20.0)      24 (37.5)      26 (37.1) 
Somewhat Agree        0 (0.0)        2 (2.7)        3 (4.7)        3 (4.3) 
Neutral              27 (41.5)       9 (12.9)      17 (26.6)       3 (4.3) 
Somewhat Disagree     2 (0.03)       0 (0.0)        0 (0.0)        0 (0.0) 
Disagree              0 (0.0)        0 (0.0)        0 (0.0)        0 (0.0) 
Strongly Disagree     0 (0.0)        1 (1.5)        0 (0.0)        1 (1.4) 
"Don't Know"            n/a         34 (34.2)         n/a         20 (28.6) 
No Response           9 (13.9)       1 (1.5)        4 (6.3)        1 (1.4) 


Reliabilities of Pre-test Measures 

The service quality questionnaire used in this study contained four measures: (1) 
the SERVPERF scale measuring perceived service quality (i.e., items 3 through 24), (2) 
four items measuring service time perceptions (i.e., items 25 through 28), (3) overall 
service quality, and (4) intended future behavior (items 34 and 35). Cronbach's alpha 
was used to assess the reliabilities of these measures. Perceived service quality as 
measured by SERVPERF had an alpha of 0.8873 (n=93). The alpha for the items 
evaluating service time perceptions was 0.6560 (n=131). The alpha for intended future 
behavior was 0.6292 (n=126). Since OSQ was a single-item measure, its reliability could not 
be assessed; however, the reliabilities for the other measures were in an acceptable range. 
Tables 4-3 and 4-4 show the item-to-total statistics for the perceived service quality and 
perceived service time measures. Only question 4 had a corrected item-to-total 
correlation of less than 0.30; however, since the deletion of question 4 resulted in only a 
small improvement in alpha (from 0.8873 to 0.8886), the item was retained. 
Factor Analysis of SERVPERF Scale 

A principal components factor analysis using a Varimax rotation was used to 
explore the factor structure of the modified SERVPERF scale in this setting. The analysis 
revealed six factors with eigenvalues over 1.00 explaining approximately 71.4% of the 
variance. The eigenvalue for the seventh factor was 0.872; therefore, it was unlikely that 
seven factors would have produced a better separation of the variables. The Scree Plot 
presented in Figure 4-1 shows the eigenvalues for each component; components 
numbered seven and higher have eigenvalues of less than one. As expected from the 
reliability testing, the communalities of each of the variables (Table 4-5) were all 
satisfactory, with only three variables having communalities below 0.6 (i.e., Q-5, Q-22, 
and Q-23). 

Table 4-6 below shows the rotated component matrix for the six factor solution. 
Unfortunately, replication of the factors demonstrated by Parasuraman et al. (1988) was 






not achieved. However, at least one item from each of the hypothesized dimensions (i.e., 
tangibles, reliability, responsiveness, assurance, and empathy) defined each of the 
respective factors. The differences in the factor structure may have resulted from a 
number of sources. First, two of the items were reworded to improve clarity. Second, 
eight of the items were reworded from the second-person perspective to the first-person 
perspective. Third, the order of some of the items was rearranged to reduce the sense of 
redundancy. Fourth, the items related to the tangibles dimension were replaced by new 
items, so it was not known where these new items would load or how their 
correlations with the other items would affect the factor structure. 



Table 4-3. Pre-test Item-total Statistics for SERVPERF Subscale (n=93) 

Item      Corrected Item-Total Correlation      Alpha if Item Deleted 
Q3        0.3555                                0.8865 
Q4R       0.2762                                0.8886 
Q5        0.3189                                0.8878 
Q6        0.6317                                0.8806 
Q7        0.6494                                0.8791 
Q8        0.6609                                0.8764 
Q9        0.6800                                0.8790 
Q10       0.6249                                0.8795 
Q11R      0.6019                                0.8788 
Q12R      0.5296                                0.8832 
Q14R      0.6328                                0.8786 
Q15R      0.6123                                0.8802 
Q16R      0.6572                                0.8767 
Q17       0.3328                                0.8910 
Q18R      0.7585                                0.8764 
Q20       0.6836                                0.8790 
Q22R      0.5498                                0.8807 
Q23R      0.3354                                0.8875 
Q24R      0.3238                                0.8890 

"R" next to the question number indicates that the question was reverse coded for analysis purposes. 




Table 4-4. Pre-test Item-total Statistics for Perceived Service Time (n=131) 

Item      Corrected Item-Total Correlation      Alpha if Item Deleted 
Q25       0.5235                                0.5545 
Q26R      0.5074                                0.5803 
Q27R      0.6125                                0.5389 
Q28       0.3462                                0.6465 

"R" next to the question number indicates that the question was reverse coded for analysis purposes. 



Table 4-5. Pre-test Item Communalities 

Item                              Communality 
Q3 - Necessary Resources          0.843 
Q4R - Background Noise            0.849 
Q5 - Written Materials            0.555 
Q6 - Speak Clearly                0.652 
Q7 - Promised Time                0.831 
Q8 - Sympathetic & Reassuring     0.611 
Q9 - Dependable                   0.719 
Q10 - Provides in Time            0.884 
Q11R - Individual Attention       0.698 
Q12R - When Performed             0.687 
Q14R - Prompt Service             0.792 
Q15R - Willingness to Help        0.662 
Q16R - Too Busy                   0.827 
Q17 - Trust Employees             0.728 
Q18R - Personal Attention         0.747 
Q19 - Polite Employees            0.772 
Q20 - Safe Interactions           0.738 
Q22R - Know Needs                 0.518 
Q23R - Best Interests             0.469 
Q24R - Operating Hours            0.691 

"R" next to the question number indicates that the question was reverse coded for analysis purposes. 




Table 4-6. Rotated Component Matrix of Pre-test Responses (n=93) 
(primary loadings shown) 

Component 1 
  Q10 - Provides in Time           0.895 (RE) 
  Q7 - Promised Time               0.847 (RE) 
  Q9 - Dependable                  0.715 (RE) 
  Q14R - Prompt Service            0.660 (RS) 

Component 2 
  Q16R - Too Busy                  0.853 (RS) 
  Q15R - Willingness to Help       0.749 (RS) 
  Q18R - Personal Attention        0.667 (EM) 
  Q19 - Polite Employees           0.600 (AS) 
  Q20 - Safe Interactions          0.537 (AS) 

Component 3 
  Q5 - Written Materials           0.719 (TA) 
  Q11R - Individual Attention      0.648 (EM) 
  Q12R - When Performed            0.633 (RS) 

Component 4 
  Q24R - Operating Hours           0.814 (EM) 
  Q8 - Sympathetic & Reassuring    0.515 (TA) 
  Q22R - Know Needs                0.514 (EM) 

Component 5 
  Q17 - Trust Employees            0.717 (AS) 
  Q23R - Best Interests            0.536 (EM) 
  Q6 - Speak Clearly               0.455 (TA) 

Component 6 
  Q4R - Background Noise           0.896 (TA) 
  Q3 - Necessary Resources         0.858 (TA) 

"R" next to the question number indicates that the question was reverse coded for analysis purposes. 
The letters in parentheses indicate the dimension on which the item loaded in the original 
SERVQUAL research conducted by Parasuraman et al. (1988), where EM = Empathy, 
RS = Responsiveness, RE = Reliability, AS = Assurance, and TA = Tangibles. 




Figure 4-1. Scree Plot of Pre-test Data (Variables = 20; N=93) 
(eigenvalues plotted by component number) 



Simulation Development, Verification, and Validation 

Simulation was chosen as the appropriate modeling tool to answer the three 
specific research questions posed by this study for three reasons. First, simulations make 
it easier to clarify thinking about systems problems, because the focus is placed on 
defining the components of the system rather than the complex interrelationships. Second, 
simulation enables the consideration of the impact of changes in all the factors influencing 
the system, such as staffing levels, service rates, and arrival rates. Third, since these 
changes are made on the computer, effects of systems changes can be evaluated and tested 
without subjecting the real system to unnecessary stress (Reilly et al., 1978). This section 
describes the techniques used to construct, verify, and validate the simulation model. 

Model Construction 

The simulation model was constructed using the GPSS/H (General Purpose 
Simulation System) simulation language on an MS-DOS based personal computer. Four 
steps were necessary to construct the simulation. First, approximately 12 hours 
were spent by the principal investigator observing the system in order to gain a first- 
person perspective of the actual work processes in the DIPRC. Different days and times 
were selected so that a broad perspective was achieved. Second, flow charts were 
developed documenting the steps necessary to complete a service transaction. From these 
flow charts, it was possible to identify the events, facilities, variables, 
decisions, inputs, and outputs necessary to model the system (Hoover and Perry, 1989). 
Third, these flow charts were translated into block diagrams that directly represented 
program code (Appendix G). Fourth, after the historical data sheets, the data collection 
forms, and the personal interviews were analyzed, the simulation code was written using 
the data derived from these sources as inputs (Appendix H). 
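Since the GPSS/H source itself appears in Appendix H rather than here, the following Python sketch is only a simplified stand-in for the kind of model being described: a first-come, first-served queue with a fixed number of servers and exponential interarrival and service times set near the overall historical means reported in chapter five. It omits the DIPRC's priority rules, question-type service distributions, and operating-hours logic.

import heapq
import random

def simulate_fifo_queue(n_servers=3, arrival_mean=38.8, service_mean=46.3,
                        n_calls=2000, seed=42):
    """Mean wait in queue (minutes) for a simplified FIFO multi-server model."""
    random.seed(seed)
    free_at = [0.0] * n_servers          # next time each "student" becomes free
    heapq.heapify(free_at)
    clock, waits = 0.0, []
    for _ in range(n_calls):
        clock += random.expovariate(1.0 / arrival_mean)   # next question arrives
        soonest_free = heapq.heappop(free_at)
        start = max(clock, soonest_free)                  # wait only if all staff are busy
        waits.append(start - clock)
        heapq.heappush(free_at, start + random.expovariate(1.0 / service_mean))
    return sum(waits) / len(waits)

print(f"Mean wait in queue: {simulate_fifo_queue():.1f} minutes")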



Verification of the Model 

Once the computer model was constructed, it underwent a verification process. 
Verification is the process by which the model is tested to make sure that it is performing 
as it should. In other words, the verification process analyzes whether the program code 
"correctly represents the model assumptions and system data" (Carson 1989, p. 552). 
Three techniques were used to verify the model. First, tracing the simulation is a 
common process used to make sure that entities move through the simulation as expected 
(Hoover and Perry, 1989). The GPSS/H tool called the "interactive debugger", which 
allows for interactive traces of the simulation, was used for this step of the verification 
process (Schriber, 1991). Second, logical relationships imposed in the model were 
verified. This confirmed that facilities and queues were not exceeding their capacity 
(Hoover and Perry, 1989). Third, simulation results were compared to those expected by 
an analytical model. This was accomplished by changing the parameters of the model (e.g., 
changing empirical distributions to exponential distributions and using constants instead of 
random variables) so that the simulation results could be compared to results 
mathematically derived from known equations (Hoover and Perry, 1989). 
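For example, when the model is simplified to exponential interarrival and service times feeding a single first-come, first-served queue, the simulated mean wait can be checked against the M/M/s (Erlang C) result. The sketch below computes that analytic value; the rates shown are illustrative, taken loosely from the overall means reported in chapter five, and are not the exact verification inputs.

import math

def mms_expected_wait(arrival_rate, service_rate, servers):
    """Expected wait in queue (Wq) for an M/M/s system via the Erlang C formula.
    Rates are per unit time; requires arrival_rate < servers * service_rate."""
    a = arrival_rate / service_rate                       # offered load
    rho = a / servers                                     # server utilization
    p0_inv = sum(a**k / math.factorial(k) for k in range(servers))
    p0_inv += a**servers / (math.factorial(servers) * (1 - rho))
    erlang_c = (a**servers / (math.factorial(servers) * (1 - rho))) / p0_inv
    return erlang_c / (servers * service_rate - arrival_rate)

# Roughly one arrival every 38.8 minutes, one service every 46.3 minutes, 3 students.
print(f"Analytic Wq: {mms_expected_wait(1 / 38.8, 1 / 46.3, 3):.1f} minutes")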

Model Validation 

Throughout the process of building the simulation, the simulation model 
underwent a series of validation steps. Validation is the process by which the researcher 
determines if the model is "sufficiently accurate for the purpose at hand and which can be 
used as a substitute for the real system" (Carson, 1989, p. 552). In other words, a valid 
model can be used in place of the real system for purposes of asking questions and making 
comparisons (Carson, 1989; Hoover and Perry, 1989). Three techniques were used to 
validate the model: (1) face validation, (2) extreme-conditions tests, and (3) comparison of 
simulation output to data from the real system (Balci, 1989; Hoover and Perry, 1989). 






First, face validation was necessary to judge whether or not the model seemed 
reasonable to those knowledgeable about the system being studied (Balci, 1989; Carson, 
1989; Hoover and Perry, 1989). In two separate meetings, the co-directors of the DIPRC 
were given a structured walk-through of the simulation using the block-diagrams. The co- 
directors were both asked four general questions: (1) "Do you understand how the model 
will operate?", (2) "Does the model conform to your knowledge of the service processes 
of the DIPRC?", (3) "Is there anything present in the model that seems incorrect?", and 
(4) "Is there some part of the service process that is missing from the model?" Changes to 
the simulation were made based on the co-directors' responses to these questions. 

Second, the computer model was validated under extreme conditions. To some 
extent, the behavior of a system under extreme conditions should remain plausible and change 
in the expected direction (Hoover and Perry, 1989; Sargent, 1992). For example, if the 
arrival rate increases dramatically, we would expect that line length, time in the 
queue, and utilization percentage should also increase dramatically. 

Third, one of the most powerful techniques for validation is the comparison of the 
model to the original system. Chi-square goodness-of-fit tests were used to answer 
questions concerning the equality of the underlying distributions. It was recognized a 
priori, however, that the simulation results would probably not exactly match those of the 
real system, because the model is a simplified version of the real system and did 
not reflect elements intentionally excluded from the model (Balci, 1989; Hoover and 
Perry, 1989; Sargent, 1992). Therefore, regression analysis was also conducted to detect 
how much variation in the real system the simulation actually predicted. 
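The sketch below illustrates the general form of such a comparison with hypothetical binned counts: a chi-square goodness-of-fit test of real-system counts against the simulated distribution, followed by a regression of real values on simulated values whose R-squared indicates how much real-system variation the model reproduces.

import numpy as np
from scipy import stats

observed_real = np.array([120, 85, 40, 22, 10])   # real-system counts per service-time bin (hypothetical)
simulated = np.array([115, 90, 38, 25, 9])        # simulated counts per bin (hypothetical)

# Rescale the simulated distribution to the real total so the chi-square test is valid.
expected = simulated / simulated.sum() * observed_real.sum()
chi2, p = stats.chisquare(observed_real, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Regression of real-system values on simulation predictions.
slope, intercept, r, p_reg, se = stats.linregress(simulated, observed_real)
print(f"R-squared = {r**2:.3f}")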

Variance Reduction 

Variance reduction in simulation models is important because it helps improve the 
power for detecting significant statistical differences in the simulation experiments. 






Variance reduction techniques allow simulation experiments to obtain greater precision 
(e.g., smaller confidence intervals) with less simulation (Law and Kelton, 1991). One 
commonly used method of variance reduction is antithetic variates, which uses strong 
negative covariances to reduce the variation among experimental runs (Law and Kelton, 
1991; Neelamkavil, 1987). Simulation runs were constructed using the antithetic variate 
capabilities available in GPSS/H through the RMULT statements. 
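The idea behind antithetic variates can be sketched outside GPSS/H as follows: one run is driven by a stream of uniform random numbers U and its partner by 1 - U, and the two runs' outputs are averaged. The exponential mean used here is only a placeholder for whatever output measure is of interest.

import math
import random
import statistics

def mean_service_time(uniforms, mean=46.3):
    """Exponential service times via the inverse transform X = -mean * ln(U)."""
    return statistics.mean(-mean * math.log(u) for u in uniforms)

def antithetic_pair(seed, n=1000):
    """One antithetic pair of runs: the second run reuses 1 - U from the first,
    producing negatively correlated outputs whose average has lower variance
    than the average of two independent runs."""
    rng = random.Random(seed)
    uniforms = [rng.uniform(1e-12, 1 - 1e-12) for _ in range(n)]
    run_a = mean_service_time(uniforms)
    run_b = mean_service_time([1.0 - u for u in uniforms])
    return (run_a + run_b) / 2.0

estimates = [antithetic_pair(seed) for seed in range(10)]
print(f"Paired estimate of mean service time: {statistics.mean(estimates):.2f}")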

Data Analysis 

To facilitate the reporting of the results, the data analysis was broken down into 
four separate parts. The first two parts of this study were essential to develop an 
understanding of the work processes in the DIPRC necessary to construct the simulation. 
The results of these first two parts are reported in chapter five as preliminary data. The 
third and fourth parts tested the research hypotheses and specific research questions 
developed in previous chapters. The results of these analyses are presented in chapter six 
as main study results. 

Analysis of Preliminary Data 

Three data sources were used to evaluate the calling population characteristics as 
well as the arrival and service rate trends in the DIPRC: (1) archived historical data sheets 
from September, 1996 through May, 1997 (Appendix H), (2) data collection forms 
collected concurrently during the data collection period from June 1, 1997 until August 1, 
1997 (Appendix I), and (3) an internal database containing the number of questions 
answered by month for the past ten years. Three analyses were conducted using these data. 
First, the consistency between the historical data and the concurrent data was evaluated. 
The two samples were compared by profession, subscription status, question type, 
response type requested, and percentage of service delays. Proportional differences 






between the historical and concurrent groups were detected using a two-tailed z-test with 
α = 0.05. Second, temporal trends were examined by month of year, day of week, and 
time of day. One-way analysis of variance (ANOVA) was used to detect overall 
differences in means, and Scheffe' multiple comparisons procedures were used to analyze 
for significant mean differences between groups. Alpha for the ANOVA and post-hoc 
procedures was set at the 0.05 level. Third, the empirically observed interarrival and 
service time distributions were compared with distributions known to be useful in 
modeling system behavior (i.e., exponential and Weibull). This comparison was 
completed using Kolmogorov-Smirnov goodness-of-fit tests and regression analysis. 
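As a concrete illustration of the proportion comparison, the pooled two-proportion z-test below reproduces the physician comparison reported in chapter five (265 of 2,325 historical questions versus 43 of 522 concurrent questions), giving a two-tailed p consistent with the p = 0.036 reported there. This is a hand-rolled sketch rather than the software actually used in the study.

import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-tailed z-test for a difference in proportions using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # 2 * (1 - Phi(|z|))
    return z, p_value

z, p = two_proportion_z(265, 2325, 43, 522)   # physician counts from Table 5-1
print(f"z = {z:.2f}, two-tailed p = {p:.3f}")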

The second part describes the results from personal interviews conducted with the 
students and co-directors working in the Drug Information and Pharmacy Resource 
Center (DIPRC). These data were used primarily to obtain information concerning the 
work process and the prioritization system used to organize the work. In addition, 
perceptions about caller preferences and recommendations for improvement were also 
obtained. The data from each of the twelve audio-recorded personal interviews were 
collated and content analyzed based on the responses to the semi-structured interview 
outline. 

Analysis of Data Related to Main Study 

The third part reports the results of the validation and reliability testing of the 
service quality questionnaire and the hypothesis tests used to explore the relationships 
among service time, service delays, perceived service time, evaluations of perceived 
service quality, and behavioral intention in the drug information service setting. Chapter 
three presented eight hypotheses related to these variables. Hypothesis 5a (H5a) was 
tested using simple linear regression, and hypothesis 5b (H5b) was evaluated through an 
examination of residual plots obtained from the regression analysis. The remaining 






hypotheses were tested using correlation, t-tests, and one-way analysis of variance 
(ANOVA). Pearson correlation coefficients were used to detect significant relationships 
between variables. Hypotheses that did not demonstrate statistically significant 
correlations between the study variables were rejected. Additional analysis using 
ANOVA was conducted for hypotheses with statistically significant correlations. Scheffe' 
multiple comparison procedures were conducted when an ANOVA resulted in a 
significant F-value indicating overall differences among level means. All analyses were 
conducted at the 0.05 level. 

The fourth part used information from the first three parts to develop and validate 
the simulation model used to optimize service capacity in the DIPRC based on total 
service time, percentage of service delays, and percent utilization. Validation and 
verification of the simulation were conducted using chi-square goodness-of-fit tests, linear 
regression, and comparisons of relevant 95% confidence intervals. Descriptive statistics, 
confidence intervals, and ratio analysis were used to explore the three research questions 
outlined in chapter three. 



CHAPTER 5 
PRELIMINARY RESULTS 

Overview 

As discussed in the previous chapter, the data analysis was broken into four related 
parts. This chapter presents the preliminary results generated from parts one and two of 
the data. Part one has three subsections. First, the data gathered from the historical data 
sheets were compared with the concurrent data gathered from the data collection forms. 
This was done to establish the degree of homogeneity between the historical and 
concurrent samples and to explore potential sources of bias in the results. The two 
samples were compared by profession, subscription status, question type, response type, 
and the occurrence of delays in service. Second, the historical data were analyzed for 
temporal trends by month, day of week, and time of day. This was done to explore for 
trends in arrivals that could be reasonably reproduced by the simulation program, and to 
identify periods for which the simulation results may not be valid. Third, the historical 
interarrival and service time distributions were tested for goodness-of-fit with known 
probability distributions (i.e., exponential and Weibull) to determine if the simulation could 
use approximated known distributions to model empirically observed interarrivals and 
service times. 

Part two has two subsections. First, the interviews conducted with the co- 
directors of the DIPRC are summarized. Second, a summary of the interviews conducted 
with the externship students is presented. 



Part One: Historical and Concurrent Data 

Analysis by Profession 

Four profession categories were tracked on the historical and concurrent 
data sheets: (1) pharmacist/Pharm.D., (2) physician, (3) nurse/nurse practitioner, and (4) a 
miscellaneous category called "other" (e.g., physician assistants, dentists, nutritionists, law 
enforcement, etc.). Table 5-1 presents the frequency of occurrence of the four profession 
categories along with the individual and cumulative percentages that the frequencies 
represented out of the total number of usable samples. The number of usable data points 
out of the total number of samples is displayed near the bottom of the table along with the 
number of missing data points. 

Pharmacists represented the largest professional group that the DIPRC serviced. 
Pharmacists posed 66.8% of the questions in the historical group and 67.8% in the 
concurrent group. Although the proportions of pharmacists and nurses were not 
significantly different between the historical and concurrent samples, the historical data 
contained a larger proportion of physicians than the concurrent sample (11.4% 
historically versus 8.2% concurrently; p=0.036) and a marginally smaller proportion of other 
professional types (13.9% historically versus 17.1% concurrently; p=0.068). However, the 
ANOVA did not reveal any significant differences in total service times related to the 
profession of the caller (F=2.231, p=0.084), reinforcing the homogeneity of the two 
samples. 

Analysis by Subscription Status 

Subscription status is typically broken into three groups by the DIPRC: (1) non- 
subscribers, (2) subscribers (i.e., those who have donated a subscription fee to the 
DIPRC), and (3) University of Florida Health System (UFHS) employees. Analysis using 






z-tests revealed differences between the historical and concurrent samples with regard to 
their subscriber and UFHS percentages (p<0.001 in both cases); however, no differences 
were detected for non-subscribers (Table 5-2). Historically, about 39.5% of the callers 
were subscribers, 30.0% were non-subscribers, and 30.5% were employees of the 
UFHS. In the concurrent sample, 26.6% were subscribers, 32.5% were non-subscribers, 
and 40.9% were employees of the UFHS. Since the ANOVA did not reveal any 
overall differences in total service times among the three subscription status groups 
(F=2.015, p=0.135), there was little concern that differences in subscription status 
between the historical and concurrent samples would bias the simulation results. 

Table 5-1. Percentage of Questions by Profession 

                               Historical Data                  Concurrent Data 
                                           Cumul.                           Cumul. 
Profession                Freq.   Percent   Percent     Freq.   Percent   Percent 
Pharmacist                1554    66.8      66.8        354     67.8      67.8 
Physician*                 265    11.4      78.2         43      8.2      76.0 
Nurse/N.P.                 182     7.9      86.1         36      6.9      82.9 
Other                      324    13.9      100.0        89     17.1      100.0 
Total Questions Counted   2325    97.5                  522     99.2 
Missing Data                60     2.5                    4      0.8 
Total                     2385    100.0                 526     100.0 

Significance denotes differences between the historical and concurrent groups. 
*p<.05  **p<.01  ***p<.001 



Table 5-2. Percentage of Questions by Subscription Status 

                               Historical Data                  Concurrent Data 
                                           Cumul.                           Cumul. 
Subscription Status       Freq.   Percent   Percent     Freq.   Percent   Percent 
Subscriber***              932    39.5      39.5        136     26.6      26.6 
Non-Subscriber             708    30.0      69.5        166     32.5      59.1 
UF Health System***        719    30.5      100.0       209     40.9      100.0 
Total Questions Counted   2359    98.9                  511     97.2 
Missing Data                26     1.1                   15      2.8 
Total                     2385    100.0                 526     100.0 

Significance denotes differences between the historical and concurrent groups. 
*p<.05  **p<.01  ***p<.001 



Analysis by Question Type 

As discussed in the previous chapter, the DIPRC answers a variety of drug related 
questions covering fourteen separate categories. Of these fourteen categories, the five 
largest in both samples were (1) drug availability, (2) drug dosage and administration, (3) 
drug identification, (4) drug therapy and efficacy, and (5) a miscellaneous category called 
"Other". These five categories made up 74.8% and 71.8% of the historical and concurrent 
questions answered, respectively (Table 5-3). 

Only three question categories demonstrated statistically significant proportional 
differences between the historical and concurrent samples. First, questions concerning 
drug therapy and efficacy represented about 19.5% in the historical sample; however, they 
only represented 13.1% of the total questions in the concurrent sample (p<0.001). This 
was the only question out of the top five to display a significant difference. Second, 
questions involving drug use in pregnancy and lactation represented only 2.1%) of the 
historical sample; however, it represented 7.8% of the concurrent sample (p<0.001). 
Finally, questions concerning side effects or adverse drug effects represented 9.1% of the 
historical sample, but only 5.1% of the concurrent sample (p=0.005). 

Although the proportional differences in question types between the historical and 
concurrent samples were not great, it was necessary to account for differences in service 
times among question types. For analysis purposes, question types representing less than 
five percent of the sample in both the historical and concurrent samples were grouped into 
one category called "combined". As such, six question types were classified as 
"combined": investigational drugs, IV compatibility and stability, legal, pharmacokinetics, 
toxicology, and veterinary drugs. Although questions falling into the drug use in 
pregnancy and lactation category represented only 2.1% of the questions in the historical 
sample, this category was kept separate since it represented 7.8% of the questions in the 
concurrent sample. 






The ANOVA comparing the service times (i.e., time to complete a question) in 
minutes for each of the nine question groups revealed that the mean service time for at 
least one question type was significantly different (F=5.127, p<0.001). Post hoc 
calculations revealed that drug identification questions were significantly different from 
questions regarding drug therapy and efficacy (p=0.001) and questions regarding drug use 
in pregnancy and lactation (p=0.028). No other significant differences were detected. 
Table 5-4 presents the descriptive statistics for each of these categories, and Figure 5-1 
graphically illustrates the 95% confidence intervals of service time for each of the 
question categories. In addition to the two significant differences in service times above, 
the confidence intervals also suggested that questions regarding side effects and adverse 
drug effects tended to have longer service times than drug identification questions. Also, 
questions from the "Other" category tended to have shorter service times than the 
remaining categories, with the exception of drug identification questions. 



Figure 5-1. 95% Confidence Intervals of Service Time 
by Question Type (in Minutes) 
(x-axis: Question Type, with Categories 7, 8, 9, 11, 13, and 14 combined) 



Table 5-3. Percentage of Questions by Type 

                                   Historical Data                  Concurrent Data 
                                               Cumul.                           Cumul. 
Question Type                 Freq.   Percent   Percent     Freq.   Percent   Percent 
Drug Availability              236    10.8      10.8         70     13.3      13.3 
Drug Dosage & Admin.           271    12.3      23.1         66     12.5      25.8 
Drug Identification            432    19.7      42.8        100     19.0      44.8 
Drug Interactions              151     6.9      49.7         36      6.8      51.6 
Drug Ther. & Efficacy***       429    19.5      69.2         69     13.1      64.7 
Drug Use in Pregnancy***        46     2.1      71.3         41      7.8      72.6 
Investigational Drugs            4     0.2      71.5          3      0.6      73.2 
IV Compat. or Stability         51     2.3      73.8          9      1.7      74.9 
Legal                           33     1.5      75.3         12      2.3      77.2 
Other                          274    12.5      87.8         73     13.9      91.1 
Pharmacokinetics                49     2.2      90.0         17      3.2      94.3 
Side Effects or ADEs**         199     9.1      99.1         28      5.3      99.6 
Toxicology                      18     0.8      100.0         1      0.2      99.8 
Veterinary Drugs                 1    <0.1      100.0         1      0.2      100.0 
Total Questions Counted       2194    92.0                  526     100.0 
Missing Data                   191     8.0                    0      0.0 
Total                         2385    100.0                 526     100.0 

Significance denotes differences between the historical and concurrent groups. 
*p<.05  **p<.01  ***p<.001 




Table 5-4. Service Times in Minutes by Question Type 

                                                          95% Confidence    95% Confidence 
Question Type             Median    Mean     St. Dev.     Lower Bound       Upper Bound 
Drug Availability         21.00     44.20    56.42        30.22             58.18 
Drug Dosage & Admin.      28.50     47.50    54.51        33.42             61.58 
Drug Identification       15.00     23.18    27.76        17.65             28.72 
Drug Interactions         32.50     51.61    45.85        36.10             67.12 
Drug Ther. & Efficacy     55.00     65.48    50.98        53.04             77.91 
Drug Use in Pregnancy     34.00     63.05    59.28        40.59             85.51 
Other                     23.00     38.00    44.99        26.85             49.15 
Side Effects or ADEs      50.00     63.41    61.31        39.15             87.66 
Combined                  36.00     53.80    59.19        34.87             72.73 
Overall                   30.00     46.30    52.13        41.71             50.89 






Based on the service time results presented above, question categories were 
further collapsed into three basic groups for purposes of the simulation. Group one had 
the shortest service time profile and represented the drug identification and "other" 
categories, equaling approximately 32.2% of the questions based on the historical data. 
Group two had a moderate service time profile and represented questions related to drug 
availability, drug dosage and administration, drug interactions, and the "combined" 
categories, equaling approximately 37.1% of the questions. Group three had the longest 
service time profile and represented the drug therapy and efficacy, drug use in pregnancy 
and lactation, and side effects and adverse drug effects question categories, equaling 
approximately 30.7% of the questions. 

The ANOVA comparing the mean service times of the three combined question 
types was significant (F=18.30, p<0.001), indicating overall differences in means among 
the groups. Post hoc analysis indicated that the mean service times for all three combined 
question types were significantly different from one another. Group one had a mean 
service time of 29.05 minutes (s=36.21 minutes), which was significantly different from 
both groups two and three (p<0.001), which had respective mean service times of 48.42 
minutes (s=54.41 minutes) and 64.35 minutes (s=58.51 minutes). Also, the mean service 
time for group two was significantly different from the mean service time for group three 
(p=0.019). Table 5-5 shows the descriptive statistics for the three groups, and Figure 5-2 
illustrates the confidence intervals obtained from grouping the questions in this 
manner. 

Table 5-5. Service Times in Minutes Using Three Combined Question Types 

                                                  95% Confidence    95% Confidence 
Question Type     Median    Mean     St. Dev.     Lower Bound       Upper Bound 
Group One         15.50     29.05    36.21        23.47             34.64 
Group Two         30.00     48.42    54.41        40.86             55.99 
Group Three       30.00     64.35    58.51        54.31             74.38 



Figure 5-2. 95% Confidence Intervals of Service Time 
Using Three Combined Question Types (in Minutes) 



Analysis by Response Type Requested 

When students accepted calls from practitioners, the caller was asked how they 
would like to receive the response to their question. Four response types were available: 
(1) oral; (2) written; (3) both (i.e., oral and written); and (4) either (i.e., oral or written). 
The most common type of response requested by the callers was an oral response, 
representing 59.0% in the historical sample and 59.2% in the concurrent sample. The 
second most requested response type was written, representing 16.9% in the historical 
sample and 21.2% in the concurrent sample (Table 5-6). The z-tests revealed significant 
differences regarding the proportion of responses from the "both" and "either" response 
types observed in the historical and concurrent groups (p<0.001). However, since a 
significant difference in total service time was not detected between these two groups 
(p=0.994), little bias was likely to be introduced. 






The ANOVA revealed significant service time differences (F=11.228, p<0.001) 
among the response types. The post hoc procedures indicated differences between the 
oral and both response types (p=0.044) and between the oral and either response types 
(p<0.001). The 95% confidence intervals for the service times by response type are 
presented in Table 5-7, and illustrated in Figure 5-3. These data suggested that responses 
involving a written component tended to take longer than responses requiring just an oral 
response. These differences may be partially explained by examining the distribution of 
question types by response type requested. Recalling that drug identification questions 
tended to have the lowest service times of the eight question categories analyzed, it is 
evident from Table 5-8 that this category also had the highest percentage of oral responses 
(83.7%). Furthermore, questions regarding drug therapy, drug use in pregnancy, and side 
effects tended to have the highest service times and the lowest percentages of oral 
responses of 37.9%, 43.6%, and 48.1%, respectively. Since Figure 5-3 shows that oral 
responses tend to have the lowest service times, it is understandable that drug 
identification questions would tend to have lower service times than the other question 
categories. Therefore, for simulation purposes, it was assumed that deviations in service 
times as a function of response type are explained by the type of question asked. 



Table 5-6. Percentage of Questions by Response Type Requested 

                               Historical Data                  Concurrent Data 
                                           Cumul.                           Cumul. 
Response Type             Freq.   Percent   Percent     Freq.   Percent   Percent 
Oral                      1273    59.0      59.0        299     59.2      59.2 
Written                    364    16.9      75.9        107     21.2      80.4 
Both***                    320    14.8      90.7         25      4.9      85.3 
Either***                  201     9.3      100.0        74     14.7      100.0 
Total Questions Counted   2343    98.2                  499     94.9 
Missing Data                42     1.8                   27      5.1 
Total                     2385    100.0                 526     100.0 

Significance denotes differences between the historical and concurrent groups. 
*p<.05  **p<.01  ***p<.001 






Table 5-7. Service Time in Minutes by Response Type Requested 

                                                   95% Confidence    95% Confidence 
Response Type     Median    Mean     St. Dev.      Lower Bound       Upper Bound 
Oral              24.50     35.77    42.44         31.78             41.76 
Written           40.00     52.41    48.61         42.91             61.91 
Both              36.00     68.43    76.32         35.43             101.44 
Either            40.00     71.89    71.79         55.14             88.64 
Overall           30.00     46.30    52.13         41.71             50.89 



Table 5-8. Frequency and Percentage of Response Types by Question Type 

Question Type              Oral            Written         Both           Either 
Drug Availability          42 (61.8%)      10 (14.7%)      3 (4.4%)       13 (19.1%) 
Drug Dosage & Admin.       37 (57.8%)      15 (23.4%)      3 (4.7%)        9 (14.1%) 
Drug Identification        82 (83.7%)       3 (3.1%)       2 (2.0%)       11 (11.2%) 
Drug Interactions          22 (64.7%)       4 (11.8%)      3 (8.8%)        5 (14.7%) 
Drug Ther. & Efficacy      25 (37.9%)      27 (40.9%)      5 (7.6%)        9 (13.6%) 
Drug Use in Pregnancy      17 (43.6%)       9 (23.1%)      4 (10.3%)       9 (23.1%) 
Other                      40 (57.1%)      23 (32.9%)      0 (0.0%)        7 (10.0%) 
Side Effects or ADEs       13 (48.1%)       9 (33.3%)      2 (7.4%)        3 (11.1%) 
Combined                   21 (53.8%)       7 (17.9%)      3 (7.7%)        8 (20.5%) 
Overall                   299 (59.2%)     107 (21.2%)     25 (5.0%)       74 (14.7%) 



E 



s 









I^U ■ 










10U- 










80' 




[ 


I 

] 


] 


60' 


I 


1 












4U ■ 


n 


] 












20. 











280 

Oral 



103 

Written 



23 
Botti 



73 

Bther 



Response Type 

Figure 5-3. 95% Confidence Intervals of Service Time 
by Response Type (in Minutes) 



83 

Occurrence of Service Delays 

Each of the samples was evaluated to determine whether there had been a delay in service, 
defined as a response time longer than the time requested. The z-test did not reveal any 
significant differences between the historical and the concurrent data groups in terms of 
the percentage of delays. The historical data indicated that, overall, 16.3% of questions 
resulted in service delays (Table 5-9). The concurrent data were very similar, indicating that 
18.6% of the questions evaluated during the study period resulted in service delays. 

In order to evaluate when delays occur, the distribution of service delays was 
examined by the time requested for the four categories available on the data collection 
form (i.e., "Stat", "Today", "Date", and "No Rush"). The results using the concurrent 
data indicated that questions requesting "Stat" (<15 minutes) attention resulted in 
the highest percentage of delays at 58.6% (Table 5-10). Questions requesting an answer 
within the day (i.e., "Today") were delayed 19.5% of the time, and those 
requesting an answer by a specific date were delayed 23.2% of the time. "No rush" 
questions were arbitrarily marked as delayed when total service times were greater than 
two weeks (2.1%). Given that the majority of calls requesting "Stat" attention were 
delayed, it is clear that under the current system in the DIPRC it is very difficult to provide 
a fifteen-minute turnaround time. 



Table 5-9. Percentage of Questions by Delay Status 

                               Historical Data                  Concurrent Data 
                                           Cumul.                           Cumul. 
Delay Status              Freq.   Percent   Percent     Freq.   Percent   Percent 
No                        1936    83.7      83.7        428     81.4      81.4 
Yes                        378    16.3      100.0        98     18.6      100.0 
Total Questions Counted   2314    97.0                  526     100.0 
Missing Data                71     3.0                    0      0.0 
Total                     2385    100.0                 526     100.0 

Significance denotes differences between the historical and concurrent groups. 
*p<.05  **p<.01  ***p<.001 






Table 5-10. Percentage of Delays in Service by Time Needed 

                       Delay in Service 
                    No          Yes          Yes 
Time Needed         Freq.       Freq.        Percent 
Stat (<15 min)       24          34          58.6 
Today               182          44          19.5 
By Spec. Date        53          16          23.2 
No Rush             143           3           2.1 



Arrivals by Month 



A database containing the number of questions per month from January 1987 to 
June 1997 was analyzed to determine if there were any significant monthly trends in the 
data. In order to reduce the error introduced by the differing number of days the 
DIPRC was available to answer questions each month, the number of questions answered 
during each month was divided by the number of days the center was actually open. This 
resulted in an average number of questions answered per day during a particular month. 
Table 5-11 shows the descriptive statistics from January to December based on the 
average number of question arrivals per day. 

The ANOVA revealed that at least one month was significantly different (F=3.432, 
p<0.001) from the other months. Post hoc calculations using the Scheffe' procedure 
revealed that the month of December was significantly different from November 
(p=0.023). No other month-pairs revealed statistically significant differences in average 
arrivals; however, January (p=0.074) and August (p=0.139) also tended to have higher 
average daily arrivals than December. Figure 5-4 graphically illustrates the 95% 
confidence intervals for the average daily number of arrivals for each of the twelve 
months. 

Overall, the analysis revealed that the average daily number of questions from 
January 1987 to June 1997 was 12.62 (n=126, s=1.38) and the average total number of 
questions per month was 263.13 (n=126, s=30.32). Limiting the analysis to the past five 
years resulted in only slightly higher numbers. From July 1992 to June 1997 the average 
daily number of questions was 12.75 (n=66, s=1.25) and the average total number of 
questions per month was 265.98 (n=66, s=25.26). In general, there were no 
practically significant differences among the months that would necessitate special 
consideration in the simulation program. However, December represents a special case 
for which the simulation program may not be valid. 



Table 5-11. Descriptive Statistics for the Average Number of Questions Answered 
by Month for the Past Ten Years 

Month      Median    Mean     Std. Dev.    95% Lower Bound    95% Upper Bound 
Jan.       13.43     13.50    1.59         12.44              14.57 
Feb.       12.90     13.00    1.38         12.08              13.93 
Mar.       12.39     12.29    1.07         11.57              13.01 
Apr.       12.10     12.27    0.91         11.66              12.89 
May        12.32     12.12    1.27         11.27              12.98 
Jun.       12.05     11.85    1.00         11.18              12.52 
Jul.       12.69     12.74    1.17         11.91              13.59 
Aug.       13.77     13.39    1.22         12.53              14.26 
Sep.       12.85     12.65    1.11         11.85              13.45 
Oct.       12.67     12.63    0.74         12.10              13.16 
Nov.       13.62     13.82    1.72         12.59              15.05 
Dec.       11.32     11.15    1.27         10.24              12.06 
Overall    12.61     12.62    1.38         12.37              12.86 



Interarrivals by Day of Week 



The interarrival times (i.e., the times between arrivals) were evaluated by day of 
week using the historical data. The ANOVA did not reveal any significant differences in 
interarrivals by day of week (F=0.348, p=0.846); however, Monday did have the shortest 
mean interarrival time at 37.08 minutes (s=40.77), and Friday had the longest at 40.28 
minutes (s=39.95) (Table 5-12). These differences were not practically significant and 
do not warrant special consideration in the simulation. 



Figure 5-4. 95% Confidence Intervals of Average Daily Arrivals by Month 
(N = 11 monthly observations for January through June; N = 10 for July through December) 



Table 5-12. Descriptive Statistics for Question Interarrival Times 
in Minutes by Day of Week 

Day        Median    Mean     Std. Dev.    95% Lower Bound    95% Upper Bound 
Mon.       25.00     37.08    40.77        33.09              41.07 
Tue.       27.00     39.12    40.42        34.98              43.26 
Wed.       25.00     38.31    42.20        34.14              42.49 
Thu.       25.00     39.65    42.52        35.31              44.00 
Fri.       30.00     40.28    39.95        36.09              44.47 
Overall    25.00     38.84    41.18        38.98              40.70 



Interarrivals by Time of Day 

When the interarrivals were compared by time of day using the historical data, the 
ANOVA revealed significant differences among the time intervals examined (F=19.63, 
p<0.001). The post hoc analyses indicated that there were significant differences between 
the morning interarrivals observed during the 9:00 a.m. to 11:59 a.m. time intervals and 
the afternoon interarrivals observed during the time intervals from 12:00 p.m. to 4:59 p.m. 
(Table 5-13). Table 5-14 shows the descriptive statistics for each of the observed time 
intervals. Figure 5-5 graphically illustrates the interarrival trend by time of day. The 
shortest interarrivals occurred in the morning starting at 9:00 a.m., with a mean interarrival 
time of 17.97 minutes, or between three and four arrivals per hour. Interarrival times 
lengthened throughout the morning until 1:00 p.m., by which time the interarrival time was 
approximately 30 minutes, or two arrivals per hour. After 1:00 p.m., the interarrival times 
leveled off at a mean of approximately 50 minutes, or a little better than one arrival per 
hour. Since these differences were practically significant as well as statistically significant, 
the simulation was programmed to generate simulated calls taking into consideration the 
hour of the day. 
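The following is a minimal, illustrative sketch of hour-dependent call generation. The simulation in this study was written in GPSS/H and used empirical distributions; this Python fragment uses exponential sampling only to keep the example short, with the hourly means taken from Table 5-14.

import random

# Mean interarrival time in minutes for each hour of the day (Table 5-14).
MEAN_INTERARRIVAL = {9: 17.97, 10: 24.12, 11: 31.46, 12: 36.51,
                     13: 51.82, 14: 48.92, 15: 46.48, 16: 47.58}

def generate_arrivals(open_minute=9 * 60, close_minute=17 * 60):
    """Yield simulated call arrival times in minutes since midnight."""
    clock = float(open_minute)
    while clock < close_minute:
        hour = int(clock // 60)
        mean = MEAN_INTERARRIVAL.get(hour)
        if mean is None:                            # outside the modeled hours
            return
        clock += random.expovariate(1.0 / mean)     # next exponential interarrival
        if clock < close_minute:
            yield clock

arrivals = list(generate_arrivals())
print(len(arrivals), "simulated calls between 9:00 a.m. and 5:00 p.m.")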

Interarrival and Service Time Distributions 

The coefficient of variation (CV) is an alternative measure of variability that often 
gives information about the form of a continuous distribution. It is calculated by dividing 
the standard deviation by the mean. For the exponential distribution, the CV should be 1.00 
regardless of the scale parameter (β), where β is approximated by the mean. Skewness is 
a measure of the symmetry of a distribution, and should be equal to 2.00 for an 
exponential distribution (Law and Kelton, 1991). The interarrival distribution was the 
most promising candidate for a good fit with the exponential distribution, since it had a CV 
of 1.06 and a skewness measure of 2.01 (Table 5-15). However, total service time and the 
service times by question group (i.e., corresponding to the three combined groups 
presented above) were also good possibilities for a fit with the exponential distribution, 
since their coefficients of variation and skewness measures were all fairly close to the 
desired values. It was unlikely that a goodness-of-fit test would indicate that the 
underlying distributions for the Time to Take a Call, the Approval Time, and the Time to 
Return a Call were exponential, given that their skewness measures were at least two times 
the desired 2.00 (i.e., Time to Take a Call had a skewness of 4.64, Approval Time had a 
skewness of 9.20, and Time to Return a Call had a skewness of 4.30). The 
Kolmogorov-Smirnov goodness-of-fit test (K-S test) was used to test whether or not 
the interarrival and service time distributions were exponentially distributed. The 
computational procedures outlined by Law and Kelton (1991) were used to compute the 
value of D_n, which in turn was used to calculate the K-S test statistic. This test statistic 
was then compared to a critical value (i.e., c_(1-α)), where: 



n = number of observations, X_(i) = the i-th smallest observation, and 
F^(x) = the fitted exponential distribution function 

D_n+ = max over 1<=i<=n of { i/n - F^(X_(i)) }                          (Formula 5-1) 

D_n- = max over 1<=i<=n of { F^(X_(i)) - (i-1)/n }                      (Formula 5-2) 

D_n = max { D_n+, D_n- }                                                (Formula 5-3) 

(D_n - 0.2/n) (sqrt(n) + 0.26 + 0.5/sqrt(n)) > c_(1-α)                  (Formula 5-4) 

Formulas 5-1 through 5-4. 
Kolmogorov-Smirnov Test for the Exponential Distribution 



The critical value of the K-S test at an α level equal to 0.05 for an exponential 
distribution is 1.094 (Law and Kelton, 1991). If the test statistic was larger than the 
critical value, then the hypothesis that the distribution is exponentially distributed was 
rejected. Therefore, large values of the test statistic indicated a poor fit. None of the test 
statistics resulted in values indicating that the distributions were exponentially distributed 
(Table 5-16); however, Total Service Time (i.e., the time from the end of the initial request 
until a response was given) was the closest, with a test statistic of 1.701. 
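The computation just described can be sketched in a few lines. The following illustrative Python fragment (not the code used in the study) estimates the exponential scale parameter by the sample mean and applies the adjusted statistic shown in Formula 5-4; the sample array is a hypothetical placeholder.

import numpy as np

def ks_exponential(data, critical_value=1.094):
    """Adjusted K-S statistic for an exponential fit (scale = sample mean)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    beta = x.mean()                         # estimated scale parameter
    f_hat = 1.0 - np.exp(-x / beta)         # fitted exponential CDF at X_(i)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - f_hat)          # Formula 5-1
    d_minus = np.max(f_hat - (i - 1) / n)   # Formula 5-2
    d_n = max(d_plus, d_minus)              # Formula 5-3
    stat = (d_n - 0.2 / n) * (np.sqrt(n) + 0.26 + 0.5 / np.sqrt(n))   # Formula 5-4
    return d_n, stat, stat > critical_value      # True means reject the exponential fit

# Hypothetical placeholder sample; the study used the historical interarrival data.
sample = np.random.default_rng(1).exponential(scale=38.8, size=500)
d_n, statistic, reject = ks_exponential(sample)
print(f"D_n = {d_n:.3f}, test statistic = {statistic:.3f}, reject = {reject}")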






The Weibull distribution is another continuous distribution used extensively to 
model interarrival and service times, and has a shape similar to the gamma, exponential, 
and Erlang distributions depending on its parameter values (Law and Kelton, 1991). The 
K-S test was computed for the Interarrival Time and Total Service Time distributions in 
order to discover whether the Weibull distribution, or another distribution of similar shape, 
would provide a better fit than the exponential distribution. The critical value for a large 
sample K-S test at α equal to 0.05 for the Weibull distribution is approximately 0.874; 
however, the test statistics for the interarrival and total service time distributions were 
11.51 and 15.30, respectively. This indicated that the Weibull distribution is actually a 
poorer fit than the exponential distribution with respect to interarrivals and total service 
time. 

The exponential distribution could still probably be used in the simulation without 
greatly jeopardizing the validity of the model. Linear regression analysis between the 
observed and expected probability density functions indicated that the exponential 
distribution explains approximately 84.7% of the variance in interarrivals (β0=0.008, 
β1=0.684) and 92.3% of the variance in overall service time (β0=0.008, β1=1.291). This 
implied that the exponential distribution would probably provide a reasonable estimate of 
the observed distributions, even though the observed data are not exponentially 
distributed. However, since GPSS/H allows user-defined functions, it was decided to use 
the empirical distributions to preserve accuracy. Frequency histograms for the interarrival 
times, total service time, and service times for the three general question categories are 
presented in Figures 5-6 through 5-10. 
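Because the model ultimately relied on empirical rather than fitted distributions, a common implementation approach is inverse-CDF sampling over the observed values. The following Python sketch is illustrative only (the actual model used GPSS/H user-defined functions), and the observed_service_times list is a hypothetical placeholder.

import numpy as np

def empirical_sampler(observed, rng=None):
    """Return a function that draws from the empirical distribution of `observed`
    via inverse-CDF (linear) interpolation."""
    rng = rng or np.random.default_rng()
    x = np.sort(np.asarray(observed, dtype=float))
    cdf = np.arange(1, len(x) + 1) / len(x)      # empirical CDF at each observation
    def sample(size=1):
        u = rng.uniform(size=size)
        return np.interp(u, cdf, x)              # inverse-CDF lookup
    return sample

# Hypothetical placeholder for historical total service times (minutes).
observed_service_times = [12.0, 18.0, 25.0, 40.0, 75.0, 120.0, 240.0]
draw = empirical_sampler(observed_service_times, np.random.default_rng(42))
print(draw(5))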



Table 5-13. Significant P-Values for Interarrival Times by Hour of Day 

Hour           12:00-12:59   13:00-13:59   14:00-14:59   15:00-15:59   16:00-16:59
9:00-9:59      0.039         <0.001        <0.001        <0.001        <0.001
10:00-10:59                  <0.001        <0.001        <0.001        <0.001
11:00-11:59                  <0.001        <0.001        <0.001        <0.001

Note: Only statistically significant post hoc comparisons are shown. 



Table 5-14. Descriptive Statistics for Question Interarrival Times 
in Minutes by Hour of Day 

Hour          Median   Mean    Std. Dev.   95% Lower Bound   95% Upper Bound
9:00-9:59     15.00    17.97   23.81       13.82             22.12
10:00-10:59   20.00    24.12   21.39       21.77             26.47
11:00-11:59   25.00    31.46   27.17       28.27             34.66
12:00-12:59   30.00    36.51   33.88       31.92             41.10
13:00-13:59   35.00    51.82   50.82       45.14             58.49
14:00-14:59   30.00    48.92   50.46       42.45             55.39
15:00-15:59   30.00    46.48   51.16       39.97             52.98
16:00-16:59   30.00    47.58   42.64       42.02             53.35
Overall       25.00    38.84   41.18       38.98             40.70




(Plot area for Figure 5-5: N = 129, 321, 280, 212, 225, 236, 240, and 220 for the hours 9:00-9:59 through 16:00-16:59, respectively.) 

Figure 5-5. 95% Confidence Intervals of Interarrival Times in Minutes 
by Hour of Day 





Table 5-15. Summary Statistics for Input Distributions 

Time Distribution       Median   Mean     Std. Dev.   Coefficient of Variation   Skewness   Data Source
Interarrival Time       25.00    38.84    41.18       1.06                       2.01       Historical
Time to Take Call       3.00     3.97     3.61        0.91                       4.64       Concurrent
Total Service Time      75.00    106.40   98.92       0.93                       1.23       Historical
Service Time Group 1    15.50    29.05    36.21       1.24                       2.26       Concurrent
Service Time Group 2    30.00    48.42    54.41       1.12                       2.25       Concurrent
Service Time Group 3    30.00    64.35    58.51       0.91                       1.94       Concurrent
Approval Time           1.00     2.47     7.63        3.08                       9.20       Concurrent
Time to Return Call     3.00     4.25     5.36        1.26                       4.30       Concurrent



Table 5-16. Kolmogorov-Smirnov Tests for Exponentially Distributed Variables 

Time Distribution        N      D_n+     D_n-     D_n      Test Statistic
Interarrival Time        1889   0.096    0.038    0.096    4.209
Total Service Time       1477   0.044    0.038    0.044    1.701
Service Time Group 1     164    0.143    0.044    0.143    1.854
Service Time Group 2     164    0.232    0.129    0.232    3.021
Service Time Group 3     133    -0.092   0.424    0.424    5.005




(Histogram summary: Std. Dev = 41.18, Mean = 39, N = 1889) 
Figure 5-6. Frequency Histogram of Historical Interarrival Times 

(Histogram summary: Std. Dev = 98.92, Mean = 106, N = 1476) 
Figure 5-7. Frequency Histogram of Historical Total Service Times 

(Histogram summary: Std. Dev = 36.21, Mean = 29, N = 164) 
Figure 5-8. Frequency Histogram of Service Times for Question Group One 

(Histogram summary: Std. Dev = 54.41, Mean = 48, N = 201) 
Figure 5-9. Frequency Histogram of Service Times for Question Group Two 

(Histogram summary: Std. Dev = 58.51, Mean = 64, N = 133) 
Figure 5-10. Frequency Histogram of Service Times for Question Group Three 

Part Two: Student and Director Interviews 

Twelve personal interviews were conducted from May, 1997 to July, 1997. The 
interviewees consisted of the two co-directors for the DIPRC and ten Pharm.D. students 
in their third rotation week. The interviews were semi-structured and were conducted 
using interview outlines (Appendices J and K). 

Director Interviews 



There was a considerable amount of consistency between the student and co-
director responses to the questions asked during the interview process, especially 
concerning the hours of operation, lunch times, and the process for prioritizing and 
answering questions. However, the co-directors were asked some additional questions 






that the students were not asked. In addition, some questions were asked of the co- 
directors in a way that depended largely on extended experience working in the DIPRC. 

The first question that the co-directors were asked was "Who works in the 
DIPRC?" They indicated that typically there were two co-directors, a drug information 
resident, and three Pharm.D. externship students working in the DIPRC. One of the co- 
directors is primarily responsible for the operational aspects of the DIPRC and the other is 
responsible for the educational component of the drug information rotation. However, 
they both stressed that their duties overlapped considerably, especially regarding the 
educational aspects of the rotation. For about two-thirds of the year, the DIPRC has a 
drug information resident. The drug information resident does spend some time as a front 
line person answering phone calls; however, they eventually move on to fulfill a more 
supervisory role with regard to the externship students. Each month, three new Pharm.D. 
externship students work a four-week rotation. The DIPRC is staffed 52 weeks a year by 
these Pharm.D. students; however, there may be only two students during the Christmas 
break. Additionally, two months out of the year a pharmacy practice resident spends some 
time in the DIPRC; however, their role varies considerably. Usually, the pharmacy 
practice residents are working on other projects, so only from about a quarter to one half 
of their time is spent on the front line answering questions. Occasionally, the DIPRC has 
intern students and foreign visitors. These students usually just observe and help where 
possible, but some are advanced enough to take calls and answer questions. 

The students are encouraged to work on more than one question at a time, 
meaning that the students are required to reprioritize their work as necessary and be 
efficient with their time. Additionally, the co-directors stated that students were required 
to fill out a data sheet for every question, and to have every question approved. 

Furthermore, the co-directors stated that the students were indeed required to 
interrupt their work and answer incoming telephone calls, and added that they had a "three 
ring rule" where the telephone should be answered within three rings. One of the co- 






directors called this "the greatest irritant of all" for the students working in the center 
because it requires them to redirect and refocus their attention on something new. 

The co-directors agreed with the students concerning the easiest types of questions 
to answer, citing drug identification and availability questions as requiring the least amount 
of work. When asked about the most difficult types of questions that students typically 
have trouble answering, they indicated three kinds of questions. First, vague questions are 
frustrating for the students because there is no way to form a concise answer. Second, 
questions that actually have no answer give students trouble because they do not know 
when to quit searching. Third, questions in which the student has to synthesize 
information from multiple sources, where no one has already written the answer down 
"nice and neat", also give students trouble. 

Concerning how quickly students become skillful at answering questions, the co-
directors indicated that the learning curve was steep. There are tremendous improvements 
in the first few days of the rotation, and most students understand the process at the 
beginning of their second week. 

The co-directors felt that the most valued aspect of the service that they provide is 
a high quality answer using resources that the caller does not have access to or time to 
retrieve, and that is delivered in a timely fashion. When asked about the least valued 
aspect of their service, the co-directors indicated that many of the callers often do not 
want to give all of the necessary background information. Callers do not always 
understand that the background information is necessary to deliver a useful answer. One 
of the co-directors indicated that the background information was important in delivering 
a good response: "The more specific the question the more specific the answer. The more 
general the question the more general the answer, and what you really want as a 
practitioner is a specific answer." 






When asked about the factors that they felt influenced the callers' perceptions of 
the quality of the service, the co-directors indicated that the way the student interacts with 
the caller and the callers expectations are the two most important issues: 

One of my theories ... is that [finding] information is really only 10 or 20 
percent of what [students] do. The packaging, the ribbons and bows, the 
communication dynamic is really the other 80 to 90 percent. If [the 
student] that [the caller interacts] with really takes an interest in the 
question, seems to understand what they are asking ..., provides a well 
documented and thoroughly researched [answer], and communicates that 
[answer] back in a clear and understandable way, that is the perfect 
scenario. 

I think [callers' perceived service quality] is multifactorial. I think a lot of it 
is their expectations. I'm a big believer in expectations. A first time caller 
who has no expectations ... thinks we have a great service [if they get any 
kind of information]. A frequent caller who [uses] our service a lot and 
[has] high expectations to begin with ... I think that they could get pretty 
good information and think it is mediocre. 

However, as one of the directors pointed out, if the callers are not happy with the 
quality of the service, they tend not to call back. 

When asked about whether delays in service influence the callers' impressions of 
quality, the comments from the co-directors were mixed. One co-director stated that he 
did not believe a delay would cause a decrease in the perceived quality, because they try to 
safeguard any misunderstandings about when the question will be answered. Students are 
instructed to contact callers when necessary and update them on progress of the question. 
However, the other director felt that callers were very sensitive to delays, stating: 

[Callers are] [v]ery sensitive to [delays]. I mean, whether they really need 
it or not, if they say they need it today, and we don't get it to them today, 
that's a major failure on our part. 

With regard to the future of the DIPRC, both co-directors felt some pressure from 
the fact that funding from the hospital for the service has been dropping off for the last 
few years. Eventually, they felt that some decisions would have to be made regarding 






whether to limit the service to a select group of callers or whether the service 
should be marketed on a fee-for-service or contractual basis. 

If the DIPRC needed to add staff to support increases in question volume, the co-
directors felt it would be possible to increase the staffing level from three students to as 
many as five students a month. More than five students may be possible in the future 
given the fact that more students are coming through the pharmacy program and there are 
fewer rotation sites from which to choose. The co-directors added that the primary 
limitation with using more students is the limited supervisory resources. They felt that 
more than five students would be difficult to support, both operationally and 
educationally, without at least one more drug information resident. One of the co-
directors felt the ideal solution would be to hire a full-time staff pharmacist to help answer 
phones, supervise students, and approve responses; however, this would require 
considerable financial resources not currently available. In addition, current space 
restrictions prevent more than five students from working productively in the DIPRC. 

Student Interviews 

The first question that the students were asked was to describe their role in the 
DIPRC. All of the students answered similarly, indicating that they were responsible for 
taking calls from practitioners, researching questions, and providing answers to those 
questions back to the callers. There was no indication that there are differences between 
the students in terms of their responsibilities or the jobs that they were asked to complete. 
All of the students uniformly stated that they all had equal responsibility for completing the 
same work. As one student stated, "Everybody down there is treated the same. We are 
all expected to do the same things. It is a learning experience for us, but at the same time 
we are providing a service. " 






Students are required to work in the DIPRC from 8:00 a.m. until 5:00 p.m. each 
day. The DIPRC is typically open to take calls from 9:00 a.m. until 5:00 p.m., Monday 
through Friday, with the exception of certain holidays such as Thanksgiving and 
Christmas. The students rarely stay past 5:00 p.m. At closing time, the students reach a 
stopping point, post whatever they are working on onto a bulletin board designated for 
new and unfinished questions, and then depart the center. The latest time that a student 
has stayed beyond 5:00 p.m. was 5:25 p.m.; however, most of the students stated that they 
rarely stay past 5:05 p.m. 

The students are involved in some morning activities that potentially interfere with 
how soon they can actually begin to work on questions received from callers. On 
Tuesday, Thursday, and Friday, the students must participate in a journal club where they 
are asked to read, analyze, and present articles from the literature. On Wednesday, a 
Quality Assessment procedure is conducted in which a 10 percent random sample of the 
previous week's questions is reviewed for clarity, accuracy, and response quality. These 
activities begin at 8:00 a.m. and usually conclude between 9:00 a.m. and 10:00 a.m., with 
the most common concluding time being about 9:30 a.m. 

The students were asked what happens during this overlapping time from 9:00 
a.m. until approximately 9:30 a.m. when they are occupied in one of these meetings. They 
stated that the telephones were "religiously" opened at 9:00 a.m. If the telephone rings, 
any student not presenting may get up and answer the telephone. If the question is an 
urgent question then the answering student will immediately begin work on the question. 
If the question is not urgent (i.e., not needed within the hour), then the question is posted 
on the bulletin board for later retrieval. 

All of the students take a lunch; however, the amount of time taken seemed to vary 
considerably from 10 minutes to the full hour allowed. Two students stated that they 
usually take less than an hour and often spend their lunch in the DIPRC. Five of the 
students stated that they usually take their lunch outside the DIRPC and that lunch, for 






them, lasts from 20 to 45 minutes. Only one student took the full hour for lunch each day. 
When prompted, all of the students indicated that they would help answer the telephones 
and work on urgent questions if they decide to take their lunch in the DIPRC. Lunches 
are usually staggered from 11:00 a.m. until 1:30 p.m. The students also try to schedule 
their lunches so that no one is left by himself or herself to answer the telephones. 

All of the students indicated that their primary role in the DIPRC is to take calls 
from healthcare practitioners and write down their questions. After taking down the 
question they are instructed to ask some "probing" questions in order to find out more 
specifically where the question originated and what resources the practitioner had already 
consulted. Also, they obtain the various demographic information necessary to complete 
the data sheet (i.e., data collection form) such as subscriber status, profession, and contact 
information (see Appendices H and I). Most of this information is taken on a separate 
sheet of paper and then transcribed onto the data collection form after the call is 
completed. The call is then logged into a logbook for tracking purposes. 

At this point, the students described three possible routes of action. First, if the 
question is not urgent they will tack the question onto the bulletin board with the other 
questions that have not yet been started. Second, if the question is urgent and the 
question that they are currently working on is not urgent, then they will stop working on 
the current question and begin on the new question. Third, if they were already working 
on an urgent question, they would ask one of the other students to take the question. 

The next step is to research and write a response to the question based on the 
information they have found. Following this, they bring this response to one of the co- 
directors or a qualified resident for approval. If the co-directors indicate that the response 
needs more work then they continue to research and re-work the answer. Once the 
answer is approved, they respond to the caller, usually by telephone, but sometimes by fax 
or letter. Finally, the call is logged out of the logbook and the data sheet is placed in the 
Quality Assurance bin. 






All of the students stated that data sheets are filled out and approval is obtained for 
each question asked, regardless of the question's apparent simplicity, such as callers asking 
for phone numbers to pharmaceutical companies. Sometimes, complex questions are 
broken down into several smaller questions. Furthermore, if a call comes in, the students 
are instructed to interrupt their work and answer the telephone. As one student stated: 

[It's] pretty much our duty if the phone rings to answer it regardless of 
what we are doing unless we happen to be on the line with someone 
else ... Once we receive the call, whether we'd go on to work on it right at 
that moment is ... a subjective thing. The students are also in charge of 
prioritizing calls, in terms of what gets worked on when. 

The students were consistent in their answers when asked how many questions at a 
time they typically handle. All of the students said that they never work on more than one 
question at the same time, but may work on other questions while they are waiting for 
return phone calls or when higher priority questions arrive. As one student described: 



It's not like you have two pieces of paper beside you and you have books 
on each subject, that's not how we work ... One [question] takes priority 
over the other. It's never like anyone has four or five questions running 
around ... [We always] put them back on the [bulletin] board to keep them 
straight. 



When asked how they decide which question to work on next, all of the students 
stated that the first consideration is given to the "time frame" that the requestor presented 
to them. The students pay primary attention to the "Stat", "Today", 
"Date", and "No Rush" categories found on the data sheets. From the statements made 
regarding priorities during the interviews, there seemed to be three priority levels. First, 
"Stat" questions tend to receive the highest priority along with "Today" questions needed 
within one hour. Second, "Today" questions without a particular time and "Date" 
questions needed on that particular day are given the next highest consideration. The 






lowest priority is given to "Date" questions not needed on that particular day and "No 
Rush" questions. Within these particular categories, those questions that are patient 
related as opposed to general information are given priority. There was no evidence 
suggesting that students give special priority to questions based on the callers' profession 
or subscriber status (i.e., subscriber, non-subscriber, or health center employee). 

The students indicated that once they begin to answer a question, they usually also 
complete the question. The student who takes a call often feels obligated to answer that 
question. Two students estimated that for 60 to 90 percent of the questions that they 
personally recorded, they also completed the answer. As one student explained: 



It's almost easier just to finish up answering your own questions than try to 
get involved in answering someone else's questions ... [They] may have 
done a lot of research [on the question] ... and looked in areas that [are] not 
written down ... [Usually] when people are doing a question, there is ... a 
thought process going on. 



The students uniformly stated that telephone numbers to pharmaceutical 
companies and drug identification questions were the easiest questions to answer. 
However, the question types that students selected as the most difficult types to answer 
tended to vary. Four students mentioned that questions regarding new and foreign 
medications were the most difficult. Three students stated that the most difficult questions 
involved treatments for unusual disease states or when little is known about the therapy. 
Two students said that pharmacokinetic and dosing questions were the most difficult. 
Additionally, two students said that broad, non-specific questions were also very difficult 
because it is hard to guess what the caller actually wants to know about the topic. 

Six of the students said that they were comfortable answering questions after their 
first day in the DIPRC. Three of the students said that after the first week they were 
comfortable. One student was still not comfortable answering questions even after their 
third week. Some of the students indicated that they are still learning how to answer 






certain types of questions, especially those where the caller is asking for their professional 
opinion, which may only be loosely supported by the literature. 

The students felt that "accurate information" and "correct answers" were the most 
valued aspects of the service that they provide. As one student stated, "We provide a sense 
of mental security for [the callers]." The students felt that the callers did not always like 
to give their demographic information or having to wait for an answer. They felt that 
some of the callers misunderstood the kinds of information available in the DIPRC and 
how quickly drug information could be retrieved. Also, they felt that callers wanted 
concise answers and did not appreciate "additional information" not directly relevant to the 
question asked. When asked specifically about factors that most influence the callers' 
perceptions of the quality of the service, the following five factors were mentioned. First, 
how the question was answered was mentioned six times. This relates to how 
systematically the question was answered and how the response was presented. Second, 
timeliness or promptness of the answer was mentioned four times. Third, the number and 
quality of the references used was also mentioned four times. Fourth, the attitude of the 
student when the caller first calls in to the center was mentioned two times. Fifth, the 
accuracy (i.e., correctness) of the information was mentioned once. Thus, the most 
important factor from the students' perspective was the strength and style of the given 
response. As one student described: 

You can tell them, 'Well, I don't know the answer to your question', or you 
can say 'I searched the literature and after an exhaustive search, I'm unable 
to find anything supporting the use of this drug for this disease.' The 
difference between those answers [is] one of them says I don't care [and] 
the other one says I tried but ... I couldn't find anything for you. In other 
words, how you word the answer is important, and what you can use to 
cite as your answer is important. 

The students also felt that timeliness and promptness were important to callers' 

perceptions of service quality; however, when asked more specifically about delays in 






service, there were some mixed results. Five students indicated that callers probably were 
not very sensitive to delays in service. Four of the students said that the callers ranged 
from sensitive to very sensitive to delays in service. One student stated that the sensitivity 
depended on the person. If the caller has worked in drug information before, then they 
often tended to be a little more understanding. Some of the students felt that managing 
the callers' expectations regarding when they were going to receive a response was 
important in maintaining the callers' satisfaction, as two students stated: 

People who are sensitive are those [who] call wanting [an] on the spot 
dosing recommendation. They are a little frustrated ... that you can't just 
open your book and tell them the answer right there. Those are the ones 
where you can sense the frustration. 

Time is very important to callers. That is how I feel about it. Like when 
[the caller has] a patient [in their store or office] who has a question for 
them ... they give us a call hoping to get an answer before the patient leaves. 
That constrains them to a certain time limit, which may make them feel like 
they are more dependent on us. When they are more dependent they are 
more likely to be disappointed if we don't provide. 

Lastly, when asked about what could be done to improve the process at the 
DIPRC, none of the students felt that they were being hindered in any way from doing 
their job, and were generally very pleased with their experiences during the rotation. 
However, six general suggestions were made. First, all references should be up to date. 
Second, callers should be informed that the DIPRC is staffed by students who cannot 
answer questions right away. Third, consider shortening the time allowed for lunch; 
several of the students felt that an hour for lunch was excessive. Fourth, at least two 
more computers with upgraded software should be made available. Fifth, Internet access 
should be provided on all computers, since that is often the only way to find information 
on herbal and folk remedies. Sixth, a better orientation in the beginning regarding where 
to find information on foreign drugs would be helpful in reducing the amount of time that 
it initially takes to find information on these drugs. 



CHAPTER 6 
MAIN RESULTS 



Overview 



This chapter presents the third and fourth parts of the data analysis. The third part 
reports the results of the data obtained from the service quality questionnaire. The validity 
and reliability of the service quality questionnaire is discussed first, followed by the results 
of the hypothesis tests related to the variables derived from the questionnaires and data 
collection forms. The fourth part describes the development, verification, and validation 
of the simulation model. It also reports the results of the simulated experiments 
used to explore the three specific research questions. 



Part Three: Relationships Among PSQ, OSQ, Behavioral Intention, Perceived Service 
Time, Actual Service Time, and Service Delays 



Response to Questionnaire 

Three hundred twenty nine questionnaires were mailed out during the initial 
mailing. Approximately one week after the initial mailing a reminder postcard was sent 
out to non-responders urging them to mail in the completed questionnaire. The post card 
also asked subjects to contact the researcher if they had not received a survey. Two 
weeks after the post card was mailed, non-responders were contacted by telephone to 
remind them to send in their questionnaire or to ascertain their reasons for not responding. 

Of the 332 questionnaires that were sent out, 203 were completed and returned, 
resulting in a raw response rate of 61.1%. Eight mailed questionnaires and one postcard 
were returned as undeliverable. When attempts to contact non-responders by telephone 




were made, it was learned that two subjects were on vacation and ten subjects no longer 
worked at the location where the questionnaire was mailed; therefore, these subjects never 
received the survey. Adjusting the response rate for these 21 subjects using the procedure 
proposed by Dillman (1978) resulted in a revised response rate of 65.3% for the main 
questionnaire. 
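For clarity, the arithmetic behind the two rates is straightforward, assuming the adjustment simply removes the 21 ineligible or unreachable subjects from the denominator: 

raw response rate = 203 / 332 = 61.1% 
adjusted response rate = 203 / (332 - 21) = 203 / 311 = 65.3% 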



Table 6-1. Questionnaire Sample Description 

Characteristic            Frequency (%)   % Responding
Subscriber Status:
  Health System           55 (27.1)       59.8
  Subscriber              68 (33.5)       72.1
  Non-Subscriber          80 (39.4)       54.0
Profession:
  R.Ph./Pharm.D.          141 (69.5)      68.4
  Physician               13 (6.4)        46.4
  R.N./Nurse Pract.       16 (7.9)        57.1
  Other                   33 (16.2)       47.8
Frequency of Use:
  First Time User         49 (24.4)
  1-2 Times/Year          26 (12.9)
  3-5 Times/Year          46 (22.9)
  5-10 Times/Year         40 (19.9)
  10-15 Times/Year        17 (8.5)
  >15 Times/Year          23 (11.4)
Received After:
  First Mailout           119 (58.6)
  Post Card               61 (30.1)
  Telephone Call          23 (11.3)



Of the total number of responses, 55 (27.1%) were employees of the UF Health 
System, 68 (33.5%) subscribed to the DIPRC, and 80 (39.4%) were non-subscribers 
(Table 6-1). Of these, subscribers had the highest response rate of 72.1% and non-
subscribers had the lowest response rate of 54.0%. As expected, the majority of the 
responses came from pharmacists (69.5%). Pharmacists also had the highest raw response 
rate (68.4%), while physicians had the lowest response rate (46.4%). 

Responses arrived within one of three response intervals. First, 119 responses 
(58.6%) were received before the reminder postcard was sent out. Second, 61 (30.1%) of 
the responses were received after the postcard was sent but before the reminder telephone 
call was made. Third, the remaining 23 (11.3%) questionnaires were received after the 
telephone reminder. 

Assessment of Non-Response Bias 

The most common reason given for not responding to the survey was lack of time. 
Fourteen subjects stated that they had not had the time, but would fill out and mail the 
questionnaire soon (Table 6-2). Of these fourteen, seven (50.0%) actually returned the 
questionnaire as promised. Another 12 subjects stated that they had not yet received a 
questionnaire in the mail, and asked to have another questionnaire sent to them. 
Questionnaires were re-mailed to these individuals, and eight (66.7%) were returned. 

Two subjects stated that they never used the service, even though the data sheet 
listed their name and address. The only ascertainable reason for this was that subscribers 
sometimes allow other co-workers to borrow their name and subscriber number to call the 
DIPRC. Two subjects did not fill out the questionnaire because they did not feel it applied 
to them since they were not health professionals. Only two subjects contacted gave any 
direct indication that they absolutely did not want to fill out the questionnaire, and they did 
not want to give any specific reasons for their decision. 

ANOVA procedures were conducted for subscriber status, profession, and 
response interval using perceived service quality (PSQ) and overall service quality (OSQ) 
as the dependent variables. None of the analyses performed resulted in significant overall 
F values (Table 6-3). Therefore, there was no reason to suspect response bias significant 
enough to damage the external validity of the results derived from the questionnaire data. 



Table 6-2. Reasons for Not Responding 

Reason                                           Frequency (% Returned)*
Haven't had time, but will do it soon.           14 (50.0%)
Did not receive in mail (mail another).          12 (66.7%)
Person no longer works there.                    10
Mailed the questionnaire already.                7
Person is a part-time worker.                    5
Questionnaire did not apply to me.               2
Not interested in filling out questionnaire.     2
Did not use your service.                        2
On vacation or maternity leave.                  2

*Note: Where present, percent returned equals the response rate after the telephone 
follow up attempt was made to reach the subject. 



Table 6-3. Results of One-Way ANOVA Procedures Measuring Response Bias 

           Subscription Status   Profession     Response Interval
Measure    F (p-value)           F (p-value)    F (p-value)
PSQ        1.288 (0.28)          0.918 (0.43)   0.107 (0.90)
OSQ        0.505 (0.60)          0.617 (0.61)   0.054 (0.95)



Dimensionality of the SERVPERF Sub-Scale 



The dimensionality of the SERVPERF sub-scale measuring PSQ was assessed with 
principal components factor analysis using a Varimax rotation. The analysis resulted in 
four factors with eigenvalues of greater than one, explaining 62.0% of the variance. The 
rotated component matrix is presented in Table 6-4. 
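The extraction-and-rotation procedure described here can be sketched as follows. This is an illustrative Python fragment, not the analysis code used in the study; the responses matrix is a hypothetical placeholder for the respondent-by-item scores, and the varimax routine is a standard textbook implementation.

import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Rotate a loading matrix using the standard varimax criterion."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        variance_new = s.sum()
        if variance_old and variance_new / variance_old < 1 + tol:
            break
        variance_old = variance_new
    return loadings @ rotation

def principal_components_varimax(responses, eigen_cutoff=1.0):
    """Principal components of the item correlation matrix, varimax rotated."""
    corr = np.corrcoef(responses, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]             # largest eigenvalues first
    keep = eigvals[order] > eigen_cutoff          # Kaiser criterion (eigenvalue > 1)
    loadings = eigvecs[:, order][:, keep] * np.sqrt(eigvals[order][keep])
    return varimax(loadings), eigvals[order]

# Hypothetical placeholder: 120 respondents x 20 items scored 1-7.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(120, 20)).astype(float)
rotated_loadings, eigenvalues = principal_components_varimax(responses)
print(rotated_loadings.shape, "factors retained:", int((eigenvalues > 1.0).sum()))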

A factor loading is essentially the correlation between the item and the factor 
(Crocker and Algina, 1986). The majority of the factor loadings for each factor were 
close to or above 0.600. Items five "Background Noise", six "Written Materials", seven 
"Speak Clearly", and twenty-one "Know Needs" had the lowest factor loadings of -0.390, 






0.506, 0.478, and 0.478, respectively. Of these, however, only item five's factor loading 
was below the deletion threshold of 0.400. 

Item five also had a relatively low communality (Table 6-5) of 0.288, indicating 
that the common factors did not explain the majority of the variance for this item. The 
low factor loading combined with the low communality suggested that item five was not a 
valid item. Forcing the data to fit five factors confirmed this conclusion, since item five 
was the only item to load on the fifth factor. Additionally, the positions of the other item 
loadings remained unchanged. It was therefore decided to remove item five from the 
model. 





Table 6-4. Initial Rotated Component Matrix of Main Questionnaire Responses 
for the SERVPERF Sub-Scale (N=118)* 

Factor 1:  Q12R-Individual Attention (0.892), Q18R-Personal Attention (0.861),
           Q13R-When Performed (0.711), Q14R-Prompt Service (0.708),
           Q16R-Too Busy (0.626), Q15R-Willingness to Help (0.624),
           Q22R-Best Interests (0.506), Q7-Speak Clearly (0.478)
Factor 2:  Q8-Promised Time (0.827), Q11-Provides in Time (0.820),
           Q10-Dependable (0.738), Q9-Sympathetic & Reassuring (0.714),
           Q6-Written Materials (0.582)
Factor 3:  Q20-Safe Interactions (0.901), Q19-Polite Employees (0.894),
           Q17-Trust Employees (0.634)
Factor 4:  Q23R-Operating Hours (0.617), Q4-Necessary Resources (0.581),
           Q21R-Know Needs (0.478), Q5R-Background Noise (-0.390)

Several items also showed secondary loadings between 0.30 and 0.47 on other factors. 
* "R" next to the question number indicates that the question was reverse coded for analysis purposes. 






Table 6-5. Main Questionnaire Communalities for the SERVPERF Sub-Scale (N=118)* 

Item                              Communality
Q4-Necessary Resources            0.571
Q5R-Background Noise              0.288
Q6-Written Materials              0.385
Q7-Speak Clearly                  0.360
Q8-Promised Time                  0.762
Q9-Sympathetic & Reassuring       0.572
Q10-Dependable                    0.774
Q11-Provides in Time              0.846
Q12R-Individual Attention         0.822
Q13R-When Performed               0.572
Q14R-Prompt Service               0.710
Q15R-Willingness to Help          0.576
Q16R-Too Busy                     0.615
Q17-Trust Employees               0.500
Q18R-Personal Attention           0.812
Q19-Polite Employees              0.828
Q20-Safe Interactions             0.881
Q21R-Know Needs                   0.548
Q22R-Best Interests               0.510
Q23R-Operating Hours              0.471

* "R" next to the question number indicates that the question was reverse coded for analysis purposes. 



Table 6-6 shows the final rotated component matrix after removing item five from 
the analysis. This model also revealed four factors, with the explained variance improving 
slightly to 64.6%. A scree plot shows that, indeed, the eigenvalues for the components 
begin to level off after four components (Figure 6-1). Since the eigenvalue for the fifth 
component was 0.906 and not close to 1.00, the four-factor solution was selected as the 
optimum factor structure. 

Five dimensions of perceived service quality composed the original SERVQUAL 
scale (Parasuraman et al., 1988): tangibles (TA), responsiveness (RS), reliability (RE), 
assurance (AS), and empathy (EM). Next to the factor loading for each item listed in 
Table 6-6 is a two letter code corresponding to the factor on which the item should load 
as hypothesized by the original SERVQUAL dimensions. 






Interestingly, there is some correspondence between the original five dimensions 
reported by Parasuraman et al. (1988) and the four factors presented in Table 6-6. All of 
the items representing assurance and reliability loaded on separate dimensions. However, 
items originally belonging to the responsiveness dimension loaded with three of the items 
from the empathy dimension (item 12 "Individual Attention", item 18 "Personal 
Attention", and item 22 "Best Interests"), perhaps indicating that callers do not see these 
domains as separate constructs. The remaining two empathy items (item 23 "Operating 
Hours" and item 21 "Know Needs") loaded on factor four with item four "Necessary 
Resources." Additionally, the new items replacing the tangibles dimension did not load 
together as expected. Item seven "Speak Clearly" loaded on factor one with the empathy 
and responsiveness items. As mentioned above, item four "Necessary Resources" loaded 
on factor four. Item six "Written Materials" loaded on factor two, which was defined by 
the reliability items. 

Thus, there is some evidence suggesting that PSQ is a multi-dimensional construct, 
and that the five dimensions suggested by Parasuraman et al. (1985; 1988) might have 
merit in the drug information setting. However, it must also be recognized that the 
proposed five dimensions were not fully replicated. In addition, other researchers have 
also had difficulty duplicating these dimensions (Babakus and Boller, 1992; Babakus and 
Mangold, 1992; Brown et al., 1993; Carman, 1990; Cronin and Taylor, 1992; Headley 
and Miller, 1993). Furthermore, the component structure obtained from the factor 
analysis conducted on the main questionnaire responses did not correspond to the 
component structure obtained from analyzing the responses to the pre-test. Therefore, 
there is not enough evidence to draw reliable conclusions regarding the dimensionality of 
PSQ as measured by the items in the SERVPERF scale. For the purposes of this study, 
the items in the SERVPERF scale will therefore be evaluated as a uni-dimensional 
measure of PSQ. 



Table 6-6. Final Rotated Component Matrix of Main Questionnaire Responses 
for the SERVPERF Sub-Scale (N=120)*† 

Factor 1:  Q12R-Individual Attention 0.884 (EM), Q18R-Personal Attention 0.853 (EM),
           Q13R-When Performed 0.712 (RS), Q14R-Prompt Service 0.703 (RS),
           Q16R-Too Busy 0.619 (RS), Q15R-Willingness to Help 0.616 (RS),
           Q22R-Best Interests 0.493 (EM), Q7-Speak Clearly 0.479 (TA)
Factor 2:  Q8-Promised Time 0.838 (RE), Q11-Provides in Time 0.812 (RE),
           Q10-Dependable 0.733 (RE), Q9-Sympathetic & Reassuring 0.699 (RE),
           Q6-Written Materials 0.522 (TA)
Factor 3:  Q20-Safe Interactions 0.904 (AS), Q19-Polite Employees 0.902 (AS),
           Q17-Trust Employees 0.623 (AS)
Factor 4:  Q23R-Operating Hours 0.748 (EM), Q4-Necessary Resources 0.657 (TA),
           Q21R-Know Needs 0.498 (EM)

Several items also showed secondary loadings between 0.30 and 0.47 on other factors. 
* "R" next to the question number indicates that the question was reverse coded for analysis purposes. 
† The letters in parentheses indicate the dimension on which the item loaded in the original 
SERVQUAL research conducted by Parasuraman et al. (1988), where EM=Empathy, 
RS=Responsiveness, RE=Reliability, AS=Assurance, and TA=Tangibles. 




(Plot area: eigenvalues plotted against component numbers 1 through 19.) 

Figure 6-1. Scree Plot of Main Questionnaire Responses 
(Variables = 20; N=141) 



Reliability of Sub-scales 

Reliability estimates were obtained using Cronbach's alpha for three sub-scales 
measured by the questionnaire: (1) PSQ (items 4 through 23), (2) service time perceptions 
(items 24 through 27), and (3) behavioral intention (items 33 and 34). Alpha was not 
applicable to OSQ since it is only a single-item measure. 

The 20-item SERVPERF sub-scale had an excellent initial alpha of 0.9053. 
Question five "Background Noise" was the only item with an item-total correlation of less 
than 0.30. Question five's item-total correlation was 0.1601, indicating that this question 
did not contribute well to the scale (Table 6-7). This is evidenced by a significant 
improvement in alpha if the item were to be deleted (from 0.9053 to 0.9137). Item five 
was therefore removed from the SERVPERF sub-scale, resulting in a final scale composed 
of the 19 remaining items. The final PSQ sub-scale alpha compared well to both the pre-
test alpha (0.8873) (Table 6-9) and the alphas found by Cronin and Taylor (1992) (0.884 
to 0.964), indicating that the reliability of the SERVPERF items is relatively consistent 
across service settings. 

The alpha for the service time perceptions sub-scale was 0.8322, and the alpha for 
the sub-scale measuring behavioral intention was 0.8997 (Table 6-9). Therefore, the 
internal consistency of these two scales was also considered to be very good. 
Furthermore, examining the item-total correlations of the perceived service time scale did 
not reveal any items of questionable value (Table 6-8). 
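A minimal sketch of these reliability computations (Cronbach's alpha and corrected item-total correlations) is shown below; it is illustrative only, and the items matrix is a hypothetical placeholder for the respondent-by-item scores.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = items.sum(axis=1) - items[:, j]       # scale total without item j
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(out)

# Hypothetical placeholder: 145 respondents x 19 items scored 1-7.
rng = np.random.default_rng(3)
items = rng.integers(1, 8, size=(145, 19)).astype(float)
print(f"alpha = {cronbach_alpha(items):.4f}")
print(corrected_item_total(items).round(3))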

Descriptive Statistics of Questionnaire Measures 

Table 6-10 lists the descriptive statistics (i.e., number, minimum, maximum, 
median, mean, standard error, and standard deviation) for the study variables that were 
derived from the questionnaire. First, the mean value of the PSQ scale was 35.70 points 
(n=120, s=11.76) (Table 6-10). This meant that the average score on each of the items 
making up the PSQ scale was approximately 2.00 (35.70 / 19 = 1.88), as evidenced by 
Table 6-11. Although the possible range for the PSQ score was between 19 and 133, the 
lowest observed PSQ score was a 19 and the highest observed PSQ score was a 93. 
Individually, the two items that scored best on the PSQ scale were items four "Necessary 
Resources" and fifteen "Willingness to Help" (Table 6-11). This suggested that, overall, 
respondents felt that the DIPRC has access to the resources necessary to answer their 
questions (mean=1.60, s=0.74) and that the center was very willing to help them find 
answers to their questions (mean=1.63, s=0.89). Item twenty-three "Operating Hours" 
scored the worst on the PSQ scale (mean=2.29, s=1.21), suggesting that respondents were 
not as content with the operating hours of the service as with the aspects of perceived 
service quality measured by the other items. 

Second, the average OSQ score was 1.74 (n=193, s=0.73), indicating that 
respondents generally felt that the overall service quality of the DIPRC was "Very Good". 
The best score given was a one (i.e., "Excellent") and the lowest score given was a five 
(i.e., "Poor"). Third, perceived service time was measured by four items on the 
questionnaire. On average, respondents agreed that the response time for their most 
recent question was acceptable (n=201, mean=1.96, s=1.12). Similarly, respondents 
tended to disagree with the statement that by the time they received the information it was 
no longer useful (n=201, mean=5.85, s=1.36). However, respondents were neutral when 
asked if they wished the DIPRC would provide quicker responses to questions (n=197, 
mean=4.31, s=1.77). Generally, the respondents felt that the response time was a little 
shorter than or equal to what they expected (n=200, mean=3.58, s=1.34). Fourth, the results 
regarding the respondents' intentions to use the DIPRC in the future (item 33) and 
recommend the service to a colleague (item 34) were very positive. Overall, the 
respondents indicated that not only did they intend to use the service in the future 
(mean=1.46, s=0.66), they would also recommend the service to a colleague (mean=1.38, 
s=0.55). 






Since perceived service quality is a subjective measure, it is important to describe 
results in terms of practical as well as statistical significance. The standard error for PSQ 
was 1.07 points, which translated into a 95% confidence interval around the PSQ mean of 
plus or minus approximately 2.10 points. Since the width of this confidence interval is 
about 4.20 points, it was decided that mean differences of five points or more on the 19-
item PSQ scale would be practically significant. For overall service quality, the standard 
error was 0.05 points, translating into a confidence interval width of about 0.20 points. 
Therefore, it was decided that mean differences of at least 0.25 points on the OSQ 
measure would be a conservative estimate of practical significance. 
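As a simple check, using the normal approximation for a 95% interval, the half-width is roughly 1.96 x 1.07 = 2.10 points for PSQ (full width about 4.20 points) and 1.96 x 0.05 = 0.10 points for OSQ (full width about 0.20 points). 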



Table 6-7. Item-Total Statistics for SERVPERF Sub-Scale (n=118)* 

        Corrected Item-Total   Alpha if Item
Item    Correlation            Deleted
Q4      0.2759                 0.8475
Q5R     0.1655                 0.8521
Q6      0.2630                 0.8563
Q7      0.4715                 0.8386
Q8      0.5702                 0.8380
Q9      0.2627                 0.8561
Q10     0.6993                 0.8347
Q11     0.6759                 0.8343

* "R" next to the question number indicates that the question was reverse coded for analysis purposes. 



Table 6-8. Item-Total Statistics for Service Time Perceptions (n=196)* 

        Corrected Item-Total   Alpha if Item
Item    Correlation            Deleted
Q24     0.7508                 0.7659
Q25R    0.6540                 0.7913
Q26R    0.6547                 0.8104
Q27     0.6574                 0.7901

* "R" next to the question number indicates that the question was reverse coded for analysis purposes. 



Table 6-9. Final Sub-Scale Reliabilities 

Sub-Scale               N     Items in Scale   Alpha    Pretest Alpha
SERVPERF (PSQ)          145   19               0.9137   0.8873
Time Perceptions        196   4                0.8322   0.6560
Behavioral Intention    191   2                0.8997   0.6292




Table 6-10. Descriptive Statistics for Questionnaire Measures 

Measure                         N     Min   Max   Median   Mean    Std. Dev.   Std. Error
PSQ                             120   19    93    36.5     35.70   11.76       1.07
OSQ                             193   1     5              1.74    0.73        0.05
Perceived Service Time
  Q24-Acceptable Time           201   1     7     2        1.96    1.12        0.08
  Q25-No Longer Useful          201   1     7     6        5.85    1.36        0.10
  Q26-Quicker Response          197   1     7     4        4.31    1.77        0.13
  Q27-Expected Time             200   1     7     4        3.58    1.34        0.09
Behavioral Intention
  Q33-Use in Future             199   1     6              1.46    0.66        0.05
  Q34-Recommend to Colleague    197   1     4              1.38    0.55        0.04




Table 6-11. Descriptive Statistics for PSQ Items (n=120)* 

Measure                          Median   Mean   Std. Dev.   Std. Error
Q4-Necessary Resources           1        1.60   0.74        0.07
Q6-Written Materials             2        2.00   1.00        0.09
Q7-Speak Clearly                 2        2.02   1.14        0.10
Q8-Promised Time                 2        1.84   0.77        0.07
Q9-Sympathetic & Reassuring      2        2.28   1.05        0.10
Q10-Dependable                   2        1.70   0.75        0.07
Q11-Provides in Time             2        1.87   0.83        0.08
Q12R-Individual Attention        2        1.79   1.04        0.09
Q13R-When Performed              2        2.16   1.24        0.11
Q14R-Prompt Service              2        1.83   1.02        0.09
Q15R-Willingness to Help         1        1.63   0.89        0.08
Q16R-Too Busy                    2        1.96   1.13        0.10
Q17-Trust Employees              2        1.84   1.06        0.10
Q18R-Personal Attention          2        1.71   1.00        0.09
Q19-Polite Employees             2        1.65   0.85        0.08
Q20-Safe Interactions            2        1.77   0.91        0.08
Q21R-Know Needs                  2        2.06   1.09        0.10
Q22R-Best Interests              2        1.71   0.89        0.08
Q23R-Operating Hours             2        2.29   1.21        0.11

* "R" next to the question number indicates that the question was reverse coded for analysis purposes. 



Results of Hypothesis Tests 



This section reports the results of the seven general hypotheses tested in this study. 
First, hypotheses one through three (HI, H2, and H3) were used to test the relationships 
among PSQ, OSQ, and behavioral intention. The results from the data obtained in this 
study are reported and then compared with the results obtained by Cronin and Taylor 
(1992) to establish construct validity. Next, hypothesis four (H4) tests the relationship 
between PSQ and perceived service time. Following this, hypotheses five and six (H5 and 
H6) test the relationships between actual service time and PSQ and between service delays 
and PSQ. Last, hypothesis seven (H7) tests the relationships among actual service time, 
service delays, and perceived service time. Correlations between all of the variables used 
in the hypothesis testing are presented in Table 6-12 below. 
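A correlation matrix of this kind can be produced directly from the assembled data set; the following Python sketch is illustrative only, and the df columns are hypothetical placeholders for the study variables.

import numpy as np
import pandas as pd

# Hypothetical placeholder data frame with one column per study variable
# (PSQ, OSQ, behavioral intention items, service time measures, ...).
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "PSQ": rng.normal(35.7, 11.8, 200),
    "OSQ": rng.normal(1.74, 0.73, 200),
    "Q33_use_future": rng.normal(1.46, 0.66, 200),
    "Q34_recommend": rng.normal(1.38, 0.55, 200),
})

corr = df.corr(method="pearson")   # pairwise Pearson correlations
print(corr.round(3))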






[Table 6-12. Correlations among the study variables used in the hypothesis testing.] 



H1: There is a positive relationship between perceived service quality (PSQ) and 
overall service quality (OSQ). 

The correlation between PSQ and OSQ was 0.633 (p<0.001) (Table 6-12), 
indicating that evaluations of perceived service quality tend to correspond with evaluations 
of overall service quality. This relationship is both strong and positive, confirming the 
relationship described in H1. This correlation was also consistent with the correlation 
reported by Cronin and Taylor (1992), who reported a correlation between SERVPERF 
and overall quality of 0.601. 

H2a: Intention to use the service in the future is positively associated with evaluations 
of perceived service quality (PSQ). 

H2b: Intention to recommend the service to a colleague is positively associated with 
evaluations of perceived service quality (PSQ). 

The correlation between item 33 "Use in Future" and PSQ was 0.449 (p<0.001), 
and the correlation between item 34 "Recommend to Colleague" and PSQ was 0.482 
(p<0.001) (Table 6-12). Interestingly, the correlations observed in this study were 
stronger than those observed by Cronin and Taylor (1992), who reported a correlation 
between SERVPERF and purchase intention of only 0.365. 

Further analysis was conducted by categorizing the individual responses into two 
levels, "Strongly Agree" and "All other responses". T-tests revealed statistically 
significant differences in PSQ level means for both questions (p<0.001). For item 33 
"Use in Future", the mean difference in PSQ between "Strongly Agree" and "All other 
responses" was 1 1.75 points. For item 34 "Recommend to Colleague", the mean 
difference in PSQ between "Strongly Agree" and all other responses was 11.5 points. 
Therefore, the relationship between PSQ and behavioral Intention is also practically 
significant. Thus, hypotheses H2a and H2b are supported. 



121 



Table 6-13. PSQ by Level of Behavioral Intention

                                         Mean     Std.     95% C.I. for Mean
Measure                            N     PSQ      Dev.     Lower Bound   Upper Bound
Q33-Use in Future
  Strongly Agree                   76    31.39    8.94     29.35         33.44
  All Other Responses              44    43.14    12.40    39.37         46.91
Q34-Recommend to Colleague
  Strongly Agree                   80    31.99    9.61     29.85         34.13
  All Other Responses              39    43.49    12.19    39.54         47.44



H3a: Intention to use the service in the future is positively associated with evaluations 
of overall service quality (OSQ). 

H3b: Intention to recommend the service to a colleague is positively associated with 
evaluations of overall service quality (OSQ). 



The correlation between item 33 "Use in Future" and OSQ was 0.471 (p<0.001),
and the correlation between item 34 "Recommend to Colleague" and OSQ was 0.612
(p<0.001) (Table 6-12). These correlations were consistent with the correlation between
OSQ and purchase intention reported by Cronin and Taylor (1992), who reported a
correlation between overall quality and purchase intention of 0.527. It is interesting to
note that the relationship regarding future use was not as strong as the relationship
regarding future recommendation, perhaps suggesting that service quality is
more important to intended future recommendation behavior than to intended future use.

As with H2a and H2b, additional analysis was conducted by categorizing the 
individual responses into two levels, "Strongly Agree" and "All other responses". T-tests 
revealed statistically significant differences in OSQ between the response levels for both 
items 33 and 34 (p<0.001). The mean difference between "Strongly Agree" and all other
responses in OSQ for item 33 was 0.80 points. The mean difference between "Strongly 
Agree" and all other responses in OSQ for item 34 was 0.96 points. Therefore, the 



122 



relationship between OSQ and behavioral intention is also practically significant. As such,
hypotheses H3a and H3b are accepted. 



Table 6-14. OSQ by Level of Behavioral Intention

                                         Mean     Std.     95% C.I. for Mean
Measure                            N     OSQ      Dev.     Lower Bound   Upper Bound
Q33-Use in Future
  Strongly Agree                   114   1.41     0.55     1.31          1.51
  All Other Responses              78    2.21     0.71     2.05          2.37
Q34-Recommend to Colleague
  Strongly Agree                   123   1.40     0.52     1.30          1.49
  All Other Responses              67    2.36     0.64     2.20          2.52



H4a: Acceptability of the response time of the service is positively associated with 
evaluations of perceived service quality (PSQ). 

H4b: Perceived usefulness of the information once the response was received is 
positively associated with evaluations of perceived service quality (PSQ). 

H4c: Perceived quickness of response is positively associated with evaluations of 
perceived service quality (PSQ). 

H4d: Deviations from expected response times are positively associated with 
evaluations of perceived service quality (PSQ). 



The correlations between PSQ and the measures of perceived service time were all 
statistically significant (Table 6-12). First, the correlation between PSQ and item 24 
"Acceptable Time" was 0.652, indicating that as the acceptability with the service time 
improved, so did perceived service quality. Second, the correlation between PSQ and 
item 25 "No Longer Useful" was 0.525, indicating that respondents who felt that the 
information was less useful because of prolonged response times also had lower 



123 

perceptions of service quality. Third, the correlation between PSQ and item 26 "Quicker 
Response" was 0.520, indicating that the desire for quicker response times was 
significantly related to evaluations of perceived service quality. Fourth, the correlation 
between PSQ and item 27 "Expected Time" was 0.475, indicating that longer than 
expected response times were associated with lower evaluations of perceived service quality.



Table 6-15. PSQ by Level of the Perceived Service Time Measures

                                         Mean     Std.     95% C.I. for Mean             Scheffe'
Measure                            N     PSQ      Dev.     Lower Bound   Upper Bound     Differences
Q24-Acceptable Time
  1) Strongly Agree                44    27.82    6.74     25.77         29.87           2,3,4
  2) Agree                         59    37.56    9.14     35.18         39.94           1,4
  3) Somewhat Agree                9     44.33    6.30     39.49         49.18           1
  4) Neutral/Disagree              8     55.63    19.11    39.65         71.60           1,2
  ANOVA F=27.1, p<0.001
Q25-No Longer Useful
  1) Agree/Neutral                 19    47.84    18.75    38.80         56.88           3,4
  2) Somewhat Disagree             7     40.29    9.60     31.40         49.17           4
  3) Disagree                      55    36.47    --       --            --              1,4
  4) Strongly Disagree             39    27.87    --       --            --              1,2,3
  ANOVA F=18.7, p<0.001
Q26-Quicker Response
  1) Agree/Neutral                 69    40.00    12.57    36.98         43.02           3,4
  2) Somewhat Disagree             8     32.00    7.45     25.78         38.22           None
  3) Disagree                      30    32.10    7.01     29.48         34.72           1
  4) Strongly Disagree             13    23.46    3.80     21.17         25.76           1
  ANOVA F=11.3, p<0.001
Q27-Expected Time
  1) Shorter than Expected         44    29.68    9.64     26.75         32.61           2,3
  2) Equal to Expected             50    36.26    9.02     33.70         38.82           1,3
  3) Longer than Expected          26    44.81    13.72    39.27         50.35           1,2
  ANOVA F=17.4, p<0.001



In terms of practical differences, the ANOVA procedures revealed significant 
overall differences among the levels for all variables tested (all p<0.001) (Table 6-15). 
The post hoc analyses revealed statistically significant differences for all variables between 
the first and last levels of the analyses (all p<0.001). However, item 24 "Acceptable 
Time" seemed to have the best discrimination of all the items used. There was a large 



124 

difference observed between the PSQ means of the first level "Strongly Agree" and the 
last level "Neutral/Disagree" (27.81 points). More interesting, however, is that the 
practically significant difference that emerged even between levels one and two. The mean 
difference between "Strongly Agree" and "Agree" for item 24 "Acceptable Time" was 
9.74 points, which is almost twice the threshold set for practical significance. 

Overall, these results indicate that a strong relationship exists between the 
perceived acceptability of the response time and perceived service quality. Based on these 
results, all four hypotheses relating to perceived service time (H4a through H4d) were
accepted. 



H5a: There is a significant inverse relationship between evaluations of perceived 
service quality (PSQ) and actual service time. 

H5b: There is a non-linear relationship between actual service time and perceived 
service quality (PSQ). 



The linear regression did not reveal a statistically significant relationship between
actual service time and PSQ (R=0.179, Adjusted R²=0.022, β0=34.443, β1=0.0013).
Although the regression analysis was non-significant (p=0.07), the direction and strength
of the relationship is worth additional comment. An examination of Figure 6-2 does show
a definite, if slight, upward linear trend in predicted PSQ, indicating that as actual service
time (i.e., the observed response time) increased, the PSQ score also tended to increase
(i.e., perceived service quality decreased). The prediction equation of the original model
was PSQ = 34.443 + 0.0013 * (Actual Service Time). Therefore, on average, it took
roughly 1,300 minutes, or roughly one full day, before PSQ increased by one point.
Furthermore, 4 to 5 days passed before PSQ increased by five points. These results
suggest that actual service times have a negligible effect on PSQ. Although there may be a
slight relationship between actual service time and PSQ as suggested by hypothesis H5a,



125 



this relationship is not practically significant. Therefore, hypothesis H5a is rejected on 
both a practical and statistical basis. 

Furthermore, an examination of the residual plots in Figures 6-3 and 6-4 did not
reveal any distinct patterns suggesting that a non-linear transformation would explain the
relationship between actual service time and PSQ better than simple linear regression.
Figure 6-3 shows the residuals for actual service times on a scale of 0 to 9,000 minutes.
Since a large number of the residuals are clustered in the one-day range from 0 to 500
minutes, Figure 6-4 was also produced. As mentioned, neither of these figures revealed a
recognizable pattern that could be reduced through non-linear regression methods.
Therefore, H5b was also rejected.

H6: Delays in service are associated with lower evaluations of perceived service
quality (PSQ).

There was a significant correlation between delays in service and PSQ (r=0.235,
p=0.01) (Table 6-12). Responses that were not delayed had a mean PSQ of 34.37
(s=10.55), whereas delayed responses had a mean PSQ of 41.50 (s=15.22) (Table 6-16).
This is a practically significant mean difference of 7.13 points. Therefore, the hypothesis is
accepted that delays in service are related to lower evaluations of perceived service
quality.



Table 6-16. Perceived Service Quality by Delay in Service

Delay in                                Mean     Std.     95% C.I. for Mean
Service               N      Median    PSQ      Dev.     Lower Bound   Upper Bound
No                    97     35        34.37    10.55    32.25         36.50
Yes                   22     40        41.50    15.22    34.75         48.25



126 




[Figure 6-2. Regression Equation Plot of PSQ and Predicted PSQ by Actual Service Time, where PSQ = 34.443 + 0.0013*(Actual Service Time)]



[Figure 6-3. Residual Plot of PSQ on Actual Service Time (in Minutes)]



127 



[Figure 6-4. Residual Plot of PSQ on Actual Service Times Occurring within One Day (in Minutes)]



H7a: There is a positive relationship between actual service time and perceived service 
time. 

H7b: There is a positive relationship between service delays and perceived service time. 



These two hypotheses were used to evaluate the relationship among measures of 
the actual service time, service delay and perceived service time. Only two correlations of 
the eight analyzed were significant (Table 6-12). First, actual service time versus item 26 
"Quicker Response" had a correlation of 0.164 (p=0.033). Second, actual service time 
versus item 27 "Expected Time" had a correlation of 0. 1 53 (p=0.045). Surprisingly, there 
was no relationship detected between the perceived service time measures and delays in 
service. Furthermore, since the two significant correlations were small and very little of 



128 

the associated variance between actual service time and perceived service time was 
explained, it was difficult to make any practical determination of significance. 

It is interesting to note, however, that there does appear to be at least a trend in 
the data concerning actual service time. Table 6-17 shows the percentage of samples 
above the mean actual service time for the response categories corresponding to item 27 
"Expected Time". Out of the 58 responses indicating that the response time was shorter 
than expected, only 16 (27.6%) had actual service times higher than the mean. However, 
of the 37 responses indicating that the response time was longer than expected, 19 
(51.4%) had actual service times higher than the mean. A χ² analysis run on the cell
counts only resulted in a borderline value of 5.64 (p=0.06). Therefore, although there is 
some evidence suggesting that a relationship may exist between actual service time and 
perceived service time, this evidence is not statistically conclusive. Therefore, hypotheses 
H7a and H7b are rejected. 

Table 6-17. Percentage Below/Above the Mean Actual Service Time by Q27-Expected Time

Q27-Expected Time          N     Percentage Below Mean    Percentage Above Mean
Shorter than Expected      58    72.4%                    27.6%
Equal to Expected          78    64.4%                    35.6%
Longer than Expected       37    48.6%                    51.4%



Part Four: Simulation Results 

Construction of Simulation 

The simulation was constructed in GPSS/H Professional and run on an MS-DOS-based
computer. The data needed to construct the simulation were obtained from the



129 



analyses of the historical and concurrent data as well as the co-director and student 
interviews described earlier in this chapter. Empirical interarrival distributions were 
obtained from the historical data and separated by time of day. Empirical service time 
distributions for the three general categories of questions, as well as the times to take calls 
and return answers were obtained from the concurrent data. The priority discipline for 
"stat", "today", "date", and "no rush" questions was buih into the model as described by 
the students and co-directors during the personal interviews. The distributions of the 
desired response times by time of day were obtained from the historical data. 

In order to assure statistical independence, separate pseudo-random number 
streams (20 altogether) were used for each interarrival and service time function, as well 
as for statistical transfers and selection variables. Furthermore, different random number 
seeds were used for each replication of the simulation.

Four primary assumptions were made during the construction of the simulation. 
First, all students have the same service time distributions. While there certainly are 
differences in students with regard to their individual service times, it is difficult to predict 
each month how well the individual students are going to perform. It was decided to 
assume that all students are equal since decisions must be made in the DIPRC regardless 
of random differences in student productivity from month to month. 

Second, no arrivals occur before 9:00 a.m. or after 5:00 p.m. This assumption 
does not violate reality with any great significance since the telephones are not turned on
until 9:00 a.m. and are turned off consistently at 5:00 p.m. Questions are
sometimes left on the voice mail during non-working hours; however, it was determined 
from the interviews that this is a rare occurrence. 

Third, all arrivals occur by telephone and all responses are returned by telephone. 
Although the large majority of arrivals do occur by telephone, sometimes a question will 
arrive by fax, electronic mail, or personal visit. This assumption makes little difference in 
this setting at the current question volume since the telephone capacity of the DIPRC is



130 



extremely large in relation to the volume of calls. Furthermore, the service time 
distributions do not preclude that a written response was delivered in addition to a verbal 
response. If the volume of calls were to increase to a point where balking consistently 
occurs, it may be necessary to make changes in the simulation to distinguish these arrival
sources. In the current simulation, however, it was assumed that if a simulated caller 
encounters a busy or an unanswered telephone, the caller would try back once 
immediately. If the caller then encountered a busy signal or unanswered telephone, the 
caller would wait 5-10 minutes (uniformly distributed) and then try again. If the caller 
encountered a busy or unanswered telephone a third time, the caller balks. This 
assumption was based on discussions with the co-directors regarding the behavior they 
would expect of callers encountering this situation in the real system. 

Fourth, questions requesting "stat" attention have the highest priority, followed in 
priority by "today", "date", and "no rush" questions, with "no rush" questions having the 
lowest priority. Questions are answered and returned based on this priority system. 
Furthermore, questions with higher priority preempt questions of lower priority. 
Questions within the same priority class are served on a first-come-first-served basis. This 
is the priority system described by the co-directors and students during the personal 
interviews. 

Verification and Validation of the Simulation 

Interviews were conducted with each of the co-directors of the center in order to 
explain the assumptions made in the simulation and establish the face validity of the
simulation program. In these interviews, the researcher systematically described the
simulation program to each of the co-directors using the block diagrams (Appendix Q).
Both co-directors agreed that the simulation accurately described the work process
of the DIPRC, and only minor changes were suggested. 



131 



Four phases of pilot runs were used to verify the behavior of the simulation and 
validate the predictive ability of the simulation. The first phase of simulations was used to 
trace the simulation for errors. This was conducted using the interactive debugger feature 
provided by GPSS/H and the block count output. The block counts provide information 
regarding how many transactions pass through a particular line of code during a simulation 
run. In this phase, it was verified that generated transactions were being routed through 
the simulation as expected. Also, all of the block counts summed as expected (e.g., if 100 
transactions entered the system, the presence or termination of all 100 was accounted for).

The second phase of simulations was used to verify that the simulation behaved as 
expected when interarrival times increase or decrease. Given the underlying foundation of 
queuing theory, as interarrival times decrease (i.e., arrivals become more frequent) the 
student utilization percentage, the total service time, and the expected number of 
transactions in the system should increase asymptotically. Likewise, as the interarrival 
times increase (i.e., arrivals become less frequent), utilization, total service time, and 
expected number in the system should decrease. 

To verify that the simulated system exhibited this hypothesized behavior, 20 
system days (i.e., one month) were simulated in ten separate runs of one replication each 
(where each replication included an antithetic pair of runs). A modifier was used in the 
GENERATE statements of the GPSS/H code (see Appendix R, lines 151 to 171) for each 
of the ten runs to increase or decrease the interarrival rate by a given percentage. The 
system to be verified (i.e., the simulation reflecting the observed data) had a modifier of 
one (i.e., &AM = 1). The other nine runs used respective modifiers of 0.20, 0.40, 0.60, 
0.80, 1.20, 1.40, 1.60, 1.80, and 2.00 (Table 6-18). This corresponded to simulated 
increases in interarrival times of 20% to 100% (i.e., fewer arrivals), and decreases in 
interarrival times of 20% to 80% (i.e., more arrivals).

The simulated system behaved as expected. As the arrival modifier increased from 
0.2 to 2.0, the utilization percentage decreased from 0.999 (i.e., fully utilized) to 0.460



132 



(i.e., less than half utilized). In addition, the total service time in minutes and expected 
number in the system decreased from 3519 to 107 minutes and 499.5 to 1.7 questions, 
respectively. 

The third phase of pilot runs was used to establish an appropriate warm-up period 
for the simulation, to verify that the simulation produced the expected number of arrivals 
and delays, and to validate the total service time distribution. To obtain data for these 
comparisons, six replications of 20 simulated days each were run (i.e., six independent 
simulated months). 

Since the simulation begins with zero transactions in the system, it is important to 
detect the amount of time necessary for the simulation to warm up (i.e., achieve steady 
state). The graphical method suggested by Hoover and Perry (1989) was used to select 
the warm up period used in the simulation. The simulation program was programmed to 
verify and output the total number of questions in the system at a random point once per 
simulated hour. Each 20-day run resulted in 160 antithetic pairs (960 total samples). The 
twelve samples corresponding to each hour were averaged to obtain an expected number 
in the system for that particular hour across all replications. At each hourly change, the 
result was averaged with the previous hourly results to obtain the expectation function 
presented in Figure 6-5 (i.e., E(Number in System) = (N1 + ... + Ni)/i, for i = 1 to
n). As Figure 6-5 illustrates, the appropriate warm-up period for the simulation appears to
be just under five days, indicating that the system more or less reaches steady state during 
the fourth simulated day. To allow for some margin of error, five days (i.e., 2400 
simulated minutes) was selected as the warm up period. 

The simulation compared favorably against the historical data on three variables of 
interest: (1) total question arrivals per month, (2) average question arrivals per day, and
(3) service delay percentage. A significant difference in the average number of arrivals per 
day and per month between the historical and simulated systems would indicate that the 
interarrival distributions input into the model were not behaving as expected. As 



133 



mentioned earlier, the historical data for the past five years revealed an average total 
number of arrivals each month of 266.0 (s=25.26) and an average daily number of 
questions of 12.8 (s=1.25) (Table 6-19). The simulation produced results very close to
these with an average total number of questions per month of 259.4 (s=14.93) and
average daily number of questions of 13.08 (s=0.75). The small differences observed were
not statistically significant, with the mean difference in total monthly arrivals having a T
statistic equal to -1.24 (p=0.23) and the mean difference in average daily arrivals having a
T statistic equal to 0.82 (p=0.42).

Since delays in service appeared to have a significant relationship to perceived 
service quality, it was important to assess whether the simulation could adequately predict 
the percentage of service delays. The results indicated that the percentage of service 
delays for both the simulated and historical data were the same at 16% (T = -0.39, p = 
0.71, where the simulation s=0.03 and historical s=0.37) (Table 6-19). 

In addition to insignificant differences in the mean values, the confidence intervals 
for the three comparisons are also very consistent, having similar lower and upper bounds. 
These results indicate that the simulation program models both the number of question 
arrivals into the system and the percentage of service delays with good precision. 

The data from this third phase of simulation runs was also used to compare the 
concurrent and simulated total service time distributions. The simulation was programmed 
to produce a table of total service times in 15-minute intervals. Likewise, the concurrent 
data were also sorted and counted into 15-minute time intervals. The concurrent and
simulated distributions were compared for total service times less than 480 minutes (i.e., 
completed within one day). This cutoff was selected because of the distortion between 
real time and simulated time after 480 minutes. For example, day two of the simulation 
starts at 481 minutes on the simulation clock; however, day two in real time starts at 1,441



minutes.



134 



It was not surprising that the chi-square goodness of fit test between the 
concurrent and simulated total service time distributions resulted in a rejection of the 
hypothesis that the two distributions are equal (d.f.=31, χ²=585.96, p<0.005). Although
Figure 6-6 graphically illustrates a reasonably good fit between the two probability density
functions (PDF), there are several spikes in the concurrent data (i.e., at 135, 180, 255,
285, and 300 minutes) which create large differences in the cell percentages. 
Compounding this problem is a large simulated sample size (n=2814) which tended to 
magnify these cell differences. 

Simple linear regression of the cell percentages was used instead to obtain a better 
feel for how well the simulated total service times predict the distribution of total service 
times observed in the concurrent data. The regression yielded an adjusted R^ of 0.86, 
indicating that the simulated density function explains approximately 86% of the variance 
in the concurrent density fianction. Ideally, the coefficients for the intercept ((30) and the 
simulated service times (pi) would be 0.00 and 1.00, respectively, indicating a perfect fit. 
The realized coefficients were close, with an intercept coefficient if 0.002 and a service 
time coefficient of 0.93. Interestingly, the cumulative density fianction (CDF) produced an 
even better fit, with an adjusted R^ of 0.996, an intercept coefficient of -0.042, and a 
service time coefficient of 1 .07. This improvement can largely be attributed to the 
smoothing of the spikes in the densities mentioned earlier. Plots of the CDFs are 
presented in Figure 6-7. 

The fourth phase of pilot runs was used to validate the model under restricted 
conditions where an analytical solution based on queuing theory could be compared to the 
output generated by the simulation. The simulation was modified so that it closely 
resembled an M/M/3 queuing system. In order to accomplish this, five changes were made
to the model. First, the empirical interarrival distributions by time of day were replaced 
with one exponential distribution describing the expected interarrival time for the entire 
day. Second, the service time distributions for the separate question categories were 



135 



replaced with one exponentially distributed function describing the overall expected 
service times. Third, the times to take a call and return an answer were set equal to zero 
so that multiple phases would not distort the results. Fourth, the priorities for all 
transactions were made equal (i.e., "stat", "today", "date", and "no rush" questions were 
all treated equally). Fifth, all instances of preemption were replaced with a first-come 
first-served priority status. The results of the simulated runs were then compared to the 
analytical results. 

Twelve runs of 20 simulated days each were generated using this modified 
simulation program. The program was also coded so that it included the five-day warm up
period assessed earlier. The queue statistics at the end of 20 days were compiled,
resulting in a sample of 24 data points (i.e., twelve antithetic pairs).

There was no statistically significant evidence to suggest that the simulated model 
differed from the exact analytical solution using the formulas for a M/M/s queuing system. 
The values for p, Lq, L, Wq, and W obtained from the restricted simulation and closed 
form methods were all comparable, and none of the p-values for the computed T-statistics 
were even close to statistical significance (Table 6-20). Since the restricted model 
performed as expected when compared to a known closed form method, this suggested 
that the unrestricted model should also behave as expected if an analytical model existed 
for comparison. 

The results from the above verification and validation methods indicated that the 
simulation was a credible representation of the actual system. While it is not expected that 
the model will perfectly emulate the real system, the model does seem to perform with a 
reasonably close fit that is sufficient for decision making. Therefore, it was concluded that 
this model was valid for use as a decision making tool in the DIPRC. 



136 



Table 6-18. Student Utilization, Total Service Time, and Expected Number in
System by Arrival Modifier at 20 Simulated Days

Arrival Modifier    Student Utilization    Total Service      Expected Number
(&AM)               Percentage             Time (min.)        in System
0.2                 0.999                  3519               499.5
0.4                 0.996                  2454               173.0
0.6                 0.984                  1329               61.5
0.8                 0.967                  682                25.0
1.0                 0.758                  221                6.4
1.2                 0.698                  162                3.8
1.4                 0.564                  143                2.8
1.6                 0.510                  110                1.9
1.8                 0.468                  108                1.8
2.0                 0.460                  107                1.7




[Figure 6-5. Expected Number in System for Six Replications of 20 Days (x-axis: Hour of Day)]



137 



Table 6-19. Descriptive Statistics for Selected Comparisons
Between Observed and Simulated Data*

                                          Std.         95% C.I. for Mean
                      N        Mean       Deviation    Lower Bound   Upper Bound
Observed:
  Total Questions     2395     266.0      25.26        259.8         272.2
  Daily Questions     66       12.8       1.25         12.4          13.1
  Delayed Percent     2395     0.16       0.37         0.15          0.18
Simulated:
  Total Questions     12       259.4      14.93        249.9         268.9
  Daily Questions     12       13.0       0.75         12.5          13.4
  Delayed Percent     12       0.16       0.03         0.14          0.18

*Note: Confidence Intervals Produced Using T-Statistic















Table 6-20. Comparison of the Restricted Simulation Results with the Exact M/M/3 Solution†

                          Exact       Simul.     Sim. Std.               95% C.I. for Simulated Mean
                          Solution    Mean       Deviation    p-value    Lower Bound    Upper Bound
Utilization (p)           0.78        0.78       0.07         0.82       0.75           0.81
No. Waiting (Lq)          2.25        2.39       1.88         0.71       1.60           3.19
No. in System (L)         4.61        4.74       2.05         0.76       3.87           5.60
Queue Wait (Wq)           80.28       82.60      58.90        0.85       57.70          107.50
Total Service Time (W)    164.03      164.10     61.0         0.99       138.40         189.90

† Confidence Intervals Produced Using T-Statistic



138 




[Figure 6-6. Observed Versus Simulated Probability Density Functions (PDF) of Total Service Times (in Minutes)]

[Figure 6-7. Observed Versus Simulated Cumulative Density Functions (CDF) of Total Service Times (in Minutes)]



139 
Relationship of Staffing Levels and Service Times to the Percentage of Service Delays 

The validated simulation program was used to test three specific research 
questions. First, the simulation was used to explore how changes in the staffing levels and 
service rates would affect total service times and service delays. Second, the simulation 
data was analyzed to determine the optimal combination of staffing levels and potential 
improvements in service rates and service delays. Third, the optimal solution was tested 
for sensitivity to changes in call arrival rate. 

Rl: How do changes in staffing levels and service rates impact simulated total service 
times and the percentage of service delays in the drug information service? 

Sixty simulated months (30 antithetic pairs) were run for five staffing levels and 
nine levels of service rates. The staffing levels were varied from one to five students. The 
service rate corresponding to the time required to research a question and obtain an 
approval (i.e., research time) was varied by degrees of 5%, 10%, 15%, and 20% above 
and below "normal" operation. "Normal" operation was defined as three students staffing 
the center and with arrival and service rates derived from empirically observed 
distributions. To evaluate this research question, staffing levels and variations in service
rates were each evaluated independently against the "normal" system. Tables 6-21, 6-22,
and 6-23 present the descriptive statistics for the various parameters measured by the 
simulation, including: total service time (W), wait in queue (Wq), number in system (L), 
queue length (Lq), number completed, utilization (p), and delay percentage. 
Effect of Changes in Staffing Levels 

There were sharp decreases in W as the number of simulated students staffing the 
service was increased from one student (3521 minutes, s=331.5) to two students (1385.0
minutes, s=437.4), and from two students to three students (239 minutes, s=97.6) (Table 



140 



6-21). A large decrease in W was also observed as the number of students was increased 
from three to four students (122.0 minutes, s=17.7), although the trend was less dramatic. 
Decreases in service times appeared to level off after four students. 

These decreases in W can largely be explained by examining the results for Wq. 
For a staffing level of only one student, 3478.6 minutes (s=339.5) of the total service time 
(98.7%) were spent just waiting in the queue (Table 6-21). In contrast, only 121 (s=86.1) 
minutes (50.6%) of the total service time was spent in the queue when three students 
staffed the center. Wq decreased even more when four and five students staffed the 
center. Thus, as more students are employed, questions enter service more quickly and spend
less time in the queue; however, at the current question volume, staffing levels of more
than five students would have limited benefit on Wq. 

The results for L and Lq were consistent with these findings. For one student, Lq
was 98.1% of L; however, this dropped to 50.5% when three students were employed.
In addition, the number of questions completed reveals an obvious plateau of
approximately 268 questions after three students (Table 6-21). This plateau occurred because the
simulated students finished nearly all of the questions entering the system within the given 
month. 

Changes in staffing levels were also evaluated for their effect on simulated delay 
and utilization percentages. When less than three students were employed in the 
simulation, dramatic increases in the percentage of service delays occurred. Using only
one simulated student, 79.8% (s=4.9%) of the questions were delayed past the needed 
time, and with two students 42.8% (s=12.0%) of the questions were delayed (Table 6-21). 
When three students were used, the percentage of delays was substantially lower at 17.0% 
delayed (s=4.3%). However, the gains after three students leveled off considerably. Four 
students did improve the delay percentage by almost 4%; however, there was not a
significant improvement beyond the addition of one student (i.e., five or more students).
The difference in delay percentage between four and five students was only 0.4%.



141 



Furthermore, when staffed by three students, the average utilization percent (p) 
was 80.9% (Table 6-21). When only one or two students were simulated, p rose to almost
100% in both cases. This indicated that the simulated center was understaffed when less 
than three students worked. As evidenced by the increases in W, Wq, and delayed 
percentage, one or two students simply cannot manage the current volume of work. 
When four and five students were used, p dropped to 60.9% and 48.8%, respectively. 

These results provide early evidence suggesting that four students may represent a 
worthwhile improvement given the significant improvements in total service time and 
delayed percentage. Also, the 60% to 62% utilization realized when four students were 
used is probably in an acceptable range given the students other activities and 
responsibilities not measured by the simulation. However, the use of five students at this 
question volume is questionable since the addition of the fifth student does not seem to 
offer significant improvements in W or the delayed percentage. 
Effect of Changes in Service Rates 

Under normal simulation conditions, the mean total service time (W) was 
approximately 239 minutes (Table 6-22). The simulated system was evaluated for both 
increases and decreases in research and approval times of 5%, 10%, 15%, and 20%. 
Improvements of 5%, 10%, 15%, and 20% resulted in decreases in expected total service 
times (W) of approximately 45 minutes, 72 minutes, 95 minutes, and 113 minutes.
Similarly, W increased as the times required to research and approve questions increased.
A 5% increase in research time resulted in an increase in W from approximately 239.1
minutes (s=97.6) to 282.5 minutes (s=128.4), and a 10% increase resulted in an increase in
W to 348.1 (s=169.7) minutes. In addition, a 15% increase in research and approval times
resulted in an increase in W to 423.0 minutes (s=216.2), and a 20% increase resulted in an
increase in W to 507.45 minutes (s=250.53). 



142 



The behavior of L was consistent with the results for W described above. Under 
normal conditions the expected number in the simulated system was 6.89 questions (Table 
6-22). Increases of 5% and 10% in research and approval times resulted in increases in L
to approximately 8.2 (s=4.2) and 10.1 (s=5.6) questions, respectively. A 15% increase in
research and approval times increased L to about 12.4 (s=7.0), and a 20% increase in
research and approval times increased L to about 14.9 (s=8.1).

As expected, when research and approval times increased significantly, the 
percentage of service delays and the student utilization percentage also increased. A 10% 
increase in research and approval times resulted in an occurrence of service delays 21.0%
(s=6.1%) of the time, while utilization increased to 87.3% (s=5.8%). An increase in
research and approval times of 15% increased the percentage of delays to 23.5% (s=7.1%)
and utilization to 89.9% (s=5.3%). A 20% increase in research and approval times
increased the percentage of delays to 26.7% (s=7.3%) and the utilization percentage to 
92.4% (s=4.7%). 

Decreases in the time to research and approve questions led to improvements in
the percentage of service delays and decreases in the level of student utilization. A 10%
decrease in research and approval times improved the delay percentage from 17.0%
(s=4.3%) to 13.5% (s=3.1%) and utilization decreased from 80.9% (s=6.2%) to 73.7%
(s=5.8%) (Table 6-23). A 15% decrease in research and approval times improved the
delay percentage to 12.3% (s=2.8%) but only reduced the utilization to 70.2% (s=5.8%).
In addition, a 20% decrease in research and approval times improved the delay percentage
to 11.0% (s=2.3%) and reduced utilization to 66.6% (s=5.6%).

These results implied that an overall 15% reduction in research times would
decrease the delay percentage by more than the addition of another student, and still make 
more efficient use of the existing personnel. Thus, considering that adding one student 
improved the simulated delay percentage from 17.0% to 13.0%, but reduced utilization to 



143 



60.9%, it appears that improving service rates offers a more efficient alternative when 
possible. 



R2: What combination of changes in staffing levels and service rates optimizes the
system for delays in service and total service time? 



The service quality results presented earlier provided evidence indicating that both 
perceived service time (H4a through H4d) and service delays (H6) were related to
evaluations of perceived service quality. Since perceived service time was necessarily 
measured by subjective means, the relationships observed could not objectively be 
quantified in a way useful for the purposes of simulation. In addition, perceived service 
time was not significantly related to actual service time or service delays (H7a and H7b), 
so the transformation of perceived service time into a useful variable was not possible. 
Therefore, service delay was the only remaining statistically significant link between 
evaluations of service quality and controllable aspects of the system studied. 
Consequently, the expected percentage of service delays was used as the primary variable 
to be optimized in the simulation model. Secondary consideration was given to levels of 
performance resulting in efficient total service times (W). 

Since the addition of students does not result in proportional increases in direct costs
(i.e., hourly wages), cost was an inappropriate mechanism for optimization. However, 
the co-directors described at least five negative consequences of adding additional 
students. First, each additional student requires additional training time, support, and 
supervision, and since new students are brought in and trained each month, this cost does 
not typically diminish over time. Second, lack of space and computer equipment makes the 
use of more than four students uncomfortable. Third, each student adds to the time 
required to complete the group educational activities each week, such as journal club 
presentations. Fourth, each student requires more individual education time in terms of 
guidance and grading. Fifth, increasing the number of students decreases the number of 



144 



questions that the students answer overall during their rotation, perhaps diminishing their 
experience. Thus, the overriding attitude among the co-directors seemed to reflect a
desire to provide the best possible service without adding unnecessary personnel. 

Therefore, student utilization percentage was chosen as the variable against which 
delays and total service time were optimized. Utilization percentage, as applied from the 
description in chapter one, is the percentage of the total time that the students are busy. 
Therefore, one minus the utilization percentage (i.e., 1-U) is the idle time percentage. 
There is an important tradeoff between idle time and decreases in service delays and total 
service times. As McClain and Thomas (1985) describe, "[w]aiting times can be reduced
by increasing the relative service capacity. This may be accomplished by reducing the 
arrival rate, or increasing the number of servers or their work pace (service rate). All 
these actions will increase the average idle time of the servers" (p. 562). Therefore, the 
use of utilization percentage as the balancing variable provided a means of establishing 
which changes had the largest effect on service delays while making the most efficient use 
of existing personnel. This method of evaluating efficiency is also consistent with methods 
presented by Thompson (1992) and Westgard and Berry (1986). 

The optimal combination of improvements was evaluated using a ratio analysis 
comparing the effect of simulated improvements in service capacity on delay percentage 
and total service times versus changes in student utilization percentage. As described 
above, potential improvements included the addition of one or two students (i.e., four or 
five total) and/or 5%, 10%, 15%, or 20% improvements in research and approval times. 
Each potential combination of improvements was compared to normal operating 
conditions (i.e., three students and empirically observed service and arrival rates). 

Two values measuring efficiency were calculated. First, a value measuring the 
efficiency of the simulated expected delay percentage versus the simulated expected 
utilization percentage was calculated by dividing the utilization percentage by the delay 
percentage (i.e., U/D). Since the goal was to have lower percentages of delays per 



145 



percentage of utilization, larger values indicated more efficient use of personnel. Second, 
a value measuring the efficiency of the simulated expected total service time versus the 
simulated expected utilization percentage was calculated by dividing the utilization 
percentage by the expected total service time in minutes times 1000 (i.e., U/W x 1000).
Similarly, larger values for this measure indicated greater efficiency since the goal was to 
achieve lower total service times without substantially decreasing utilization. 

Table 6-24 reports the results of this analysis. A 20% decrease in service rates 
while maintaining a staffing level of three students was the most efficient means of 
reducing delays in service. This level of improvement resulted in an efficiency ratio
(U/D) equal to 6.04 (U=66.55%, D=11.03%), indicating that each percentage point in
delay is equal to approximately 6 percentage points in utilization. Even though the 
addition of two students (i.e., five students) tends to minimize the percentage of delays, it 
does not produce optimum results since substantial decreases in utilization (i.e., increases 
in idle time) are also realized. The addition of one student resulted in an overall efficiency
ratio of 4.69. Table 6-24 indicates that while the addition of a student does significantly
decrease the percentage of delays, the system would actually be less efficient since the 
overall efficiency ratio under normal conditions was 4.77. 

Although the lowest delay percentages were observed when additional students
were added, it does not appear that the current question volume necessitates additional
students. This point is illustrated by Figure 6-8, which shows that five students always
produce fewer delays at a given service rate. However, as service rates improved, the
advantage of adding additional students decreased. A 20% decrease in service rates
produced nearly the same benefit using three students (11.03%) as it would using four
students (9.79%). Furthermore, a 20% decrease in service rates under four and five
students produced similar decreases in delays (i.e., difference of only 0.42%); however, 
the utilization percentage was much lower with five students (i.e., 39.99% with five 
students, 50.01% with four students). The results indicate that improving the service rate 



146 



was generally a more important factor in reducing the delay percentage than additional 
staffing. 

Interestingly, however, the most efficient service capacity in terms of total service 
times did require an additional student (i.e., four students). A 20% decrease in service 
rates while maintaining a staffing level of four students produced the most effective means 
of reducing the total service times. This level of improvement resulted in an efficiency 
ratio (U/W) equal to 5.71 (U=50.01%, W=87.6 minutes), indicating that each minute of
service time was equal to approximately 0.571 percentage points in utilization. Again, 
although the lowest total service times were realized when five students were simulated, 
these benefits are offset by decreases in utilization. Furthermore, in terms of total service 
time the "normal" service capacity turned out to be the least efficient of the levels 
examined. These results indicated that while service rates are important in efficiently 
managing the total service time, the staffing level was at least equally important. 

Overall, delays in service tended to be more sensitive to changes in
the service rates than were total service times. By comparing Figures 6-8 and 6-9, it is apparent
that a staffing level of three students was fairly sensitive to changes in the research and 
approval times with respect to total service time. However, staffing levels of four and five 
students are not sensitive to changes in the service rate. 

Therefore, three students combined with reductions in research and approval times 
of at least 10% achieved the optimum service capacity when considering service delays. 
However, if total service time were to be considered more important than reducing the 
delay percentage, then the optimal solution would be to staff the DIPRC with four students
and reduce research and approval times by at least 10%.

R3: How sensitive is the optimum solution to random variation in the arrival rate? 

Since queuing systems are often greatly affected by random variation in the arrival 
rate (McClain and Thomas, 1985), the optimization methods used above were recalculated



147 



under conditions of varying arrival rates. The simulation used a modifier to increase and 
decrease the interarrival rates in 5% increments. Increases and decreases were made in the
arrival rates until the threshold was reached at which the efficiency ratios (U/D and U/W)
shifted from the optimal solutions presented above. As described above, at the current call
volume, the most effective point for managing delays occurred with three students with an
improvement in research and approval times of 20%. Furthermore, the most effective
point for managing total service time occurred with four students with an improvement in
research and approval times of 20%.

Table 6-25 presents the results of the sensitivity analysis. The optimum solutions 
tend to be fairly stable. The threshold for increases in interarrivals occurred at 25%, 
essentially indicating that if the current volume were to increase somewhere between 20%
and 25%, then an additional student would be warranted. The threshold for decreases in
interarrivals was slightly more sensitive. A decrease in interarrival times of 15% to 20%
was the threshold for the change in the efficiency ratio for managing delays in service
(U/D). A decrease in interarrival times of 10% to 15% was the threshold for the change in
the efficiency ratio for managing total service time (U/W).



148 



Table 6-21. Descriptives for Queue Statistics by Number of Students (N=60)

                                     Std.         95% C.I. for Mean
                          Mean       Deviation    Lower Bound   Upper Bound
Total Service Time (W)
  1 Student               3521.9     331.5        3436.3        3607.6
  2 Students              1385.0     437.4        1272.0        1498.0
  3 Students              239.1      97.6         213.9         264.4
  4 Students              122.0      17.7         117.4         126.6
  5 Students              99.6       9.3          97.2          102.0
Wait in Queue (Wq)
  1 Student               3477.6     339.5        3389.9        3565.3
  2 Students              1250.4     444.3        1135.6        1365.2
  3 Students              121.1      86.1         98.9          143.4
  4 Students              24.5       11.5         21.6          27.5
  5 Students              7.7        4.0          6.7           8.7
Number in System (L)
  1 Student               108.2      13.6         104.7         111.8
  2 Students              41.3       14.9         37.4          45.2
  3 Students              6.9        3.2          6.1           7.7
  4 Students              3.5        0.6          3.3           3.6
  5 Students              2.8        0.3          2.7           2.9
Queue Length (Lq)
  1 Student               106.2      13.8         102.6         109.8
  2 Students              36.9       14.8         33.0          40.7
  3 Students              3.5        2.7          2.8           4.2
  4 Students              0.7        0.4          0.6           0.8
  5 Students              0.2        0.1          0.2           0.3
Number Completed
  1 Student               117.1      10.0         114.5         120.0
  2 Students              220.2      14.8         216.3         224.0
  3 Students              267.1      14.9         263.3         271.0
  4 Students              268.1      15.6         264.1         272.1
  5 Students              268.2      15.3         264.3         272.2
Utilization Percentage (p)
  1 Student               1.000      0.000        1.000         1.00
  2 Students              0.994      0.001        0.991         1.00
  3 Students              0.809      0.006        0.793         0.825
  4 Students              0.609      0.005        0.596         0.622
  5 Students              0.488      0.004        0.477         0.499
Delay Percentage
  1 Student               0.798      0.049        0.785         0.810
  2 Students              0.428      0.120        0.397         0.459
  3 Students              0.170      0.043        0.158         0.181
  4 Students              0.130      0.025        0.123         0.136
  5 Students              0.122      0.022        0.116         0.128



149 



Table 6-22. Descriptive Statistics for the Total Service Time, Time in Queue,
Number in System, and Queue Length by Percentage Change in Research and
Approval Time (N=60)

                                       Std.         95% C.I. for Mean
% Change in Research Time    Mean      Deviation    Lower Bound   Upper Bound
Total Service Time (W)
  20% Decrease               125.9     24.9         119.5         132.4
  15% Decrease               144.0     35.6         134.7         153.2
  10% Decrease               167.9     46.6         155.8         179.9
  5% Decrease                194.4     63.0         178.1         210.6
  Normal                     239.1     97.6         213.9         264.4
  5% Increase                282.5     128.4        249.4         315.7
  10% Increase               348.1     169.7        304.3         391.9
  15% Increase               423.0     216.2        367.1         478.8
  20% Increase               507.5     250.5        442.7         572.2
Time in Queue (Wq)
  20% Decrease               39.4      18.9         34.5          44.3
  15% Decrease               51.6      28.1         44.4          58.9
  10% Decrease               67.3      36.7         57.8          76.8
  5% Decrease                86.3      51.9         72.8          99.7
  Normal                     121.1     86.1         98.9          143.4
  5% Increase                157.0     116.0        127.1         187.0
  10% Increase               211.7     156.4        171.3         252.1
  15% Increase               279.6     203.6        227.0         332.1
  20% Increase               353.4     237.0        292.2         414.6
Number in System (L)
  20% Decrease               3.6       0.9          3.4           3.8
  15% Decrease               4.1       1.2          3.8           4.4
  10% Decrease               4.8       1.5          4.4           5.2
  5% Decrease                5.6       2.1          5.0           6.1
  Normal                     6.9       3.2          6.1           7.7
  5% Increase                8.26      4.2          7.1           9.2
  10% Increase               10.1      5.6          8.7           11.6
  15% Increase               12.4      7.0          10.6          14.2
  20% Increase               14.9      8.4          12.8          17.0
Queue Length (Lq)
  20% Decrease               1.1       0.6          1.0           1.3
  15% Decrease               1.5       0.9          1.2           1.7
  10% Decrease               1.9       1.1          1.6           2.2
  5% Decrease                2.5       1.6          2.1           2.9
  Normal                     3.5       2.7          2.8           4.2
  5% Increase                4.5       3.6          3.6           5.5
  10% Increase               6.1       5.0          4.9           7.4
  15% Increase               8.1       6.4          6.5           9.8
  20% Increase               10.3      7.5          8.3           12.2



150 



Table 6-23. Descriptive Statistics for Number Completed, Utilization Percentage,
and Delay Percentage by Percent Change in Research and Approval Time (N=60)

                                        Std.         95% C.I. for Mean
% Change in Service Time     Mean       Deviation    Lower Bound   Upper Bound
Number Completed
  20% Decrease               268.3      15.7         264.2         272.3
  15% Decrease               268.0      15.5         264.0         271.9
  10% Decrease               267.1      15.5         263.1         271.1
  5% Decrease                267.4      15.5         263.3         271.4
  Normal                     267.1      14.9         263.3         271.0
  5% Increase                266.3      14.6         262.5         270.1
  10% Increase               264.9      14.0         261.3         268.6
  15% Increase               262.8      13.5         259.4         266.3
  20% Increase               259.6      13.3         256.2         263.1
Utilization Percentage (p)
  20% Decrease               0.666      0.056        0.651         0.680
  15% Decrease               0.702      0.058        0.687         0.717
  10% Decrease               0.737      0.061        0.722         0.753
  5% Decrease                0.775      0.062        0.758         0.791
  Normal                     0.809      0.062        0.793         0.825
  5% Increase                0.842      0.062        0.826         0.858
  10% Increase               0.874      0.058        0.858         0.889
  15% Increase               0.899      0.053        0.885         0.913
  20% Increase               0.924      0.047        0.912         0.937
Delay Percentage
  20% Decrease               0.1103     0.023        0.104         0.116
  15% Decrease               0.1226     0.028        0.115         0.130
  10% Decrease               0.1352     0.031        0.127         0.143
  5% Decrease                0.1504     0.036        0.141         0.160
  Normal                     0.1695     0.043        0.158         0.181
  5% Increase                0.1866     0.050        0.174         0.200
  10% Increase               0.2101     0.061        0.194         0.226
  15% Increase               0.2352     0.071        0.217         0.254
  20% Increase               0.2667     0.073        0.248         0.286



151 



Table 6-24. Effectiveness of Service Capacity Improvements Under
Normal Arrival Rates (n=60)

            Expected Total    Expected      Expected
            Service Time      % Delayed     % Utilization              U/W
Level^      (W)*              (D)           (U)               U/D      (x1000)
3/Normal    239.1             16.95%        80.90%            4.77     3.38
3/5%        194.4             15.04%        77.46%            5.15     3.99
3/10%       167.9             13.52%        73.72%            5.45     4.39
3/15%       143.9             12.26%        70.16%            5.72     4.87
3/20%       125.9             11.03%        66.55%            6.04     5.29
4/Normal    122.0             12.98%        60.89%            4.69     4.99
4/5%        112.5             12.19%        58.19%            4.78     5.17
4/10%       103.3             11.21%        55.46%            4.95     5.37
4/15%       95.5              10.51%        52.74%            5.02     5.52
4/20%       87.6              9.79%         50.01%            5.11     5.71
5/Normal    99.6              12.18%        48.77%            4.00     4.90
5/5%        94.2              11.19%        46.58%            4.16     4.94
5/10%       88.2              10.64%        44.38%            4.17     5.03
5/15%       83.0              9.98%         42.19%            4.23     5.09
5/20%       77.4              9.37%         39.99%            4.27     5.17

^ Number of Students/Percent Decrease in Research and Approval Time
* In Minutes



152 




Figure 6-8. 95% Confidence Intervals for Delay Percentage by Service Rate
Modifier and Number of Servers

Figure 6-9. 95% Confidence Intervals for Total Service Time (in Minutes) by
Service Rate Modifier and Number of Servers



153 



Table 6-25. Sensitivity of Optimal Solution to Changes in the Arrival Rate (n=60)

Interarrival                        Expected Total      Expected       Expected
Modification     Level^             Service Time (W)*   % Delayed (D)  Utilization (U)    U/D     U/W (x1000)

25% Increase     2/20% Decrease        1664.9              46.79%          99.94%         2.14       0.60
                 3/Normal               861.9              30.76%          98.10%         3.19       1.14
                 3/20% Decrease         270.2              14.93%          88.16%         5.91       3.26
                 4/20% Decrease         109.7              10.39%          66.60%         6.41       6.07
                 5/20% Decrease          85.0               9.57%          53.27%         5.56       6.26

20% Increase     2/20% Decrease        1437.3              39.31%          99.43%         2.53       0.69
                 3/Normal               610.0              26.11%          95.34%         3.65       1.56
                 3/20% Decrease         204.5              13.66%          82.24%         6.02       4.02
                 4/20% Decrease         101.3              10.31%          61.86%         6.00       6.11
                 5/20% Decrease          82.7               9.52%          49.51%         5.20       5.99

15% Increase     2/20% Decrease        1180.0              35.05%          99.25%         2.83       0.84
                 3/Normal               444.4              21.48%          92.70%         4.31       2.09
                 3/20% Decrease         163.5              12.05%          77.60%         6.44       4.75
                 4/20% Decrease          95.9              10.07%          58.25%         5.79       6.07
                 5/20% Decrease          80.4               9.52%          46.59%         4.90       5.80

10% Increase     2/20% Decrease         939.3              29.18%          98.74%         3.38       1.05
                 3/Normal               347.1              19.65%          89.17%         4.54       2.57
                 3/20% Decrease         150.3              11.84%          73.92%         6.24       4.92
                 4/20% Decrease          93.8               9.74%          55.52%         5.70       5.92
                 5/20% Decrease          80.4               9.45%          44.44%         4.70       5.53

5% Increase      2/20% Decrease         756.3              26.29%          96.81%         3.68       1.28
                 3/Normal               281.7              18.32%          84.94%         4.64       3.02
                 3/20% Decrease         134.4              11.74%          70.38%         6.00       5.24
                 4/20% Decrease          90.2              10.07%          52.89%         5.25       5.86
                 5/20% Decrease          78.5               9.66%          42.34%         4.38       5.39

5% Decrease      2/20% Decrease         492.5              18.92%          92.99%         4.91       1.89
                 3/Normal               205.0              15.64%          77.76%         4.97       3.79
                 3/20% Decrease         115.9              10.70%          63.67%         5.95       5.49
                 4/20% Decrease          85.0               9.48%          47.76%         5.04       5.62
                 5/20% Decrease          76.7               9.22%          38.20%         4.14       4.98

10% Decrease     2/20% Decrease         417.9              17.42%          90.37%         5.19       2.16
                 3/Normal               192.6              15.31%          74.78%         4.88       3.88
                 3/20% Decrease         110.5              10.62%          61.11%         5.76       5.53
                 4/20% Decrease          83.2               9.65%          45.75%         4.74       5.50
                 5/20% Decrease          75.9               9.39%          36.59%         3.90       4.82

15% Decrease     2/20% Decrease         327.2              16.26%          86.96%         5.35       2.66
                 3/Normal               170.6              14.64%          71.01%         4.85       4.16
                 3/20% Decrease         105.7              10.48%          58.17%         5.55       5.50
                 4/20% Decrease          81.7               9.57%          43.59%         4.55       5.34
                 5/20% Decrease          75.4               9.34%          34.86%         3.73       4.62

20% Decrease     2/20% Decrease         264.1              14.10%          83.72%         5.94       3.17
                 3/Normal               152.1              13.98%          68.60%         4.91       4.51
                 3/20% Decrease          98.9              10.33%          56.13%         5.44       5.67
                 4/20% Decrease          79.8               9.59%          42.06%         4.38       5.27
                 5/20% Decrease          74.6               9.46%          33.64%         3.56       4.51

^ Number of Students/Percent Decrease in Research and Approval Time
* In Minutes



CHAPTER 7 
DISCUSSION AND CONCLUSIONS 



Overview 



This study had three primary objectives. First, to develop a computer simulation 
for a drug information service and validate the model against the existing system. Second, 
to investigate the associations among actual service time, service delays, and evaluations 
of perceived service quality in a drug information service setting. Third, to recommend 
system improvements based on the simulation model, in particular those improvements 
that maintain quality while reducing delays and response times. To achieve these 
objectives, the data analysis was broken up into four related parts. The first and second 
parts were presented in chapter five as preliminary results, which discussed the results of 
the analysis of the historical data and personal interviews. The third and fourth parts were 
directly related to the hypotheses and specific research questions posed in chapter three, 
and were presented in chapter six as main results. 

This chapter begins by summarizing and discussing the results from chapter six as 
they relate to the hypothesis tests and specific research questions. Where applicable, these 
results are compared with the existing literature. Next, the limitations of the study are 
described. This chapter concludes with a presentation of the study conclusions and 
recommendations for future research.



154 



155 
Summary and Discussion of Results 

PSQ, OSQ, and Behavioral Intention

The first three study hypotheses (HI, H2, and H3) tested the relationships among 
PSQ, OSQ, and behavioral intention. A 19-item scale measured perceived service quality, 
where the items were summed to produce a PSQ score. OSQ was a single item measure 
asking callers to rank the overall service on a scale from "Excellent" to "Unacceptable".
Behavioral intention was a two-item measure reflecting the callers' perceptions regarding 
their intention to use the service again and recommend the service to a colleague. 

This research confirmed the anticipated relationships among these variables. The 
strongest correlation between these variables was observed between PSQ and OSQ (r = 
0.633). However, significant relationships were also observed between PSQ and 
behavioral intention (r = 0.449 and 0.482) and between OSQ and behavioral intention (r = 
0.471 and 0.612). As discussed in chapter six, these relationships were consistent with 
similar correlations reported in Cronin and Taylor (1992). In addition, Headley and Miller 
(1993) reported strong, significant relationships between perceived service quality and 
overall quality with comparable degrees of explained variance. The direction of this 
relationship was also consistent with reports from Boulding et al. (1993), McAlexander et
al. (1994), Parasuraman (1991), and Zeithaml et al. (1996). This high degree of 
convergence supports the construct validity of these measures. 
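
For readers who wish to replicate this type of convergent validity check, the correlations reported above can be computed directly from the questionnaire data. The following Python sketch is illustrative only; the variable names and the handful of example records are hypothetical stand-ins for the summed 19-item PSQ score, the single-item OSQ rating, and the two behavioral intention items.

    # Minimal sketch of the correlation analysis described above (hypothetical
    # column names and placeholder response values, not study data).
    import pandas as pd
    from scipy import stats

    responses = pd.DataFrame({
        "psq_total": [95, 88, 120, 101, 76],      # sum of the 19 PSQ items
        "osq": [5, 4, 6, 5, 3],                   # overall quality (1 = Unacceptable ... 6 = Excellent)
        "intent_reuse": [7, 6, 7, 6, 5],          # "I intend to use this service in the future"
        "intent_recommend": [7, 5, 7, 7, 4],      # "I would recommend this service to a colleague"
    })

    # Pearson correlations among PSQ, OSQ, and the two behavioral intention items.
    pairs = [("psq_total", "osq"),
             ("psq_total", "intent_reuse"), ("psq_total", "intent_recommend"),
             ("osq", "intent_reuse"), ("osq", "intent_recommend")]
    for a, b in pairs:
        r, p = stats.pearsonr(responses[a], responses[b])
        print(f"r({a}, {b}) = {r:.3f}  (p = {p:.3f}),  r^2 = {r**2:.3f}")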

Although the correlation between PSQ and OSQ was relatively high, the reality of 
this statistic indicates that PSQ only explains approximately 36% of the variance in OSQ. 
This seemingly low predictive ability might be partially explained if we consider that PSQ 
(as derived from SERVPERF) is largely a measure of functional quality (i.e., process-
related quality) (Babakus and Mangold, 1992; Gronroos, 1990). In addition to functional
quality, various researchers have suggested that perceptions of overall quality may be 



156 



influenced by technical or outcome quality, the image of the service, previous satisfaction 
experiences, and consumers' personal and situation factors (Bolton and Drew, 1994; 
Gronroos 1990, Steenkamp 1990, Zeithaml, 1988). It is clear that more research is 
needed in order to create a multi-dimensional measurement tool that predicts overall 
perceptions of quality better than SERVQUAL or SERVPERF. Furthermore, given the 
many types of services and their inherent idiosyncrasies, it may not be possible to create a 
singularly valid instrument for evaluating perceived service quality. 

Nevertheless, the service quality literature contends that there is a strong 
relationship between perceived service quality and behavioral intention. The results of this 
study complement these assertions. Thus, it appears that the more positive callers feel
about the service quality of the DIPRC, the more likely they are to have favorable 
intentions of calling the service again and recommending the service to a colleague. It is 
interesting to note, however, that in both this study and the study conducted by Cronin 
and Taylor (1992), the relationships between measures of overall quality and behavioral 
intention were stronger than the relationships between PSQ and behavioral intention. This 
suggests, perhaps, that perceptions of overall quality (i.e., overall impressions) are more 
important in driving future behavior than evaluations of quality based on specific aspects
of the service. Again, however, further research is necessary to validate this assumption. 

Actual Service Time, Service Delays, Perceived Service Time and PSQ 

Hypotheses H4, H5, H6, and H7 were developed to test the relationships among 
actual service time, service delays, perceived service time, and PSQ. Actual service time 
referred to the actual amount of time (in minutes) required to respond to a question, and 
was measured by subtracting the "end time" for receiving a call from the "start time" for 
returning an answer. Service delay was a dichotomous variable referring to whether or not
the response time was longer than the time needed (i.e., "stat", "today", "date", and "no 



157 



msh"). Perceived service time was measured by four items in the questionnaire (Q24 
"Acceptable Time", Q25 " No Longer Useful", Q26 "Quicker Response, and Q27 
"Expected Time"). 

One of the relationships posed by the literature was that actual waiting time was 
related to service evaluations (Buxton and Gatland, 1995; Hui and Tse, 1996; Davis 1991, 
Davis and Vollmann, 1990; Katz, Larson, and Larson, 1991; Larson, 1987; Tom and 
Lucey, 1995). This study did not find any statistically or practically significant relationship 
between actual service time and perceived service quality. Even if we were to assume 
statistical significance based on a borderline p-value (p=0.07), the regression equation 
suggested that, at best, PSQ was affected by actual service time only when one full day 
passed without a response. Practically significant shifts in PSQ did not occur until the 
equivalent of four or five days had lapsed in actual service time. Although surprising at 
first glance given the literature support, this lack of relationship makes sense in this setting 
if we consider the eight principles of waiting as proposed by David Maister (1985): 



Proposition 1. Unoccupied Time Feels Longer than Occupied Time
Proposition 2. Pre-process Waits Feel Longer than In-Process Waits
Proposition 3. Anxiety Makes Waits Seem Longer
Proposition 4. Uncertain Waits Are Longer than Known, Finite Waits
Proposition 5. Unexplained Waits Are Longer than Explained Waits
Proposition 6. Unfair Waits Are Longer than Equitable Waits
Proposition 7. The More Valuable the Service, the Longer the Customer Will Wait
Proposition 8. Solo Waits Feel Longer than Group Waits



As determined in chapter 5, the pre-process time required to take a call was 
approximately four minutes and only about 12% of the calls are "stat" questions needed 
within fifteen minutes (Tables 5-10 and 5-15). Therefore, since the pre-process waits are
short and there is usually no particular immediate rush to complete questions, the effects 
of propositions one and two above are limited. Furthermore, callers do not actually wait 
in the DIPRC for their questions to be answered, therefore they do not wait in an "actual" 



158 



queue. Since there is no time spent waiting on hold, solo waits (proposition eight) and 
unfair waits (proposition six) are not really applicable to this setting. In addition, while the 
DIPRC is working on the question, the caller is probably often occupied with other duties 
and tasks, which limits the effect of proposition one. Finally, the callers tend to perceive 
the service provided by the DIPRC as highly valuable (proposition seven). This is 
evidenced by generally high PSQ and OSQ scores, as well as the distribution of responses 
to questions 29 and 30 in the questionnaire, which asked callers to rate how useful and
essential the service was to them. Approximately 97% of the respondents agreed to some
extent that the service was useful to them, and about 81% of the respondents agreed to
some extent that the service was essential. Therefore, based on Maister's propositions, 
there are clear reasons not to expect a significant relationship between PSQ and actual 
service time. 

Keeping with Maister's propositions, it is interesting to note that service delays did 
have a significant effect on perceived service quality, which is consistent with proposition 
five. Furthermore, this result was consistent with reports from Taylor (1994a), Taylor and 
Claxton (1994), and Dube'-Rioux et al. (1989). The results of this study suggest that 
callers who experience delays in service have lower perceptions of service quality than 
callers who do not experience delays in service. It should be noted, however, that the 
relationship between delays and PSQ was not very strong (r = 0.235). The strength of this 
relationship, however, is consistent with effect sizes reported by Taylor (1994a), who 
suggests that there are variables (such as anger, uncertainty, and perceived punctuality) 
that mediate the relationship between delays and service evaluations. In addition, 
Dube'Rioux et al. (1989) suggested that perceived necessity may also mediate this 
relationship. 

Among the three "time related" measures (actual service time, service delays, and 
perceived service time) perceived service time had the strongest relationship with PSQ, 
suggesting that callers' perceived service times are more important in determining their 



159 



evaluation of PSQ (and for that matter OSQ) than actual service time and service delays. 
This contention is supported in the literature by Taylor (1994a), Katz, Larson, and Larson 
(1991) and Hornik (1982, 1984). Thus, while this study was originally focused on
operational relationships between actual service times and service delays, it may be that 
more emphasis should be placed on the management of perceived service times. 

From the literature we can derive seven recommendations to managers of drug 
information services attempting to improve perceived service times. First, when notifying 
callers of the expected wait duration, it is better to overestimate the wait than to 
underestimate the wait; however, do not provide estimations that are longer than the 
consumer is willing to wait otherwise they may balk from the system. Therefore, it is 
important to know what callers consider an acceptable wait (Taylor, 1994a; Katz, Larson,
and Larson, 1991; Hornik, 1994). Second, shorten pre-process waits (i.e., the amount of
time required to take information) as much as possible (Katz, Larson, and Larson, 1991; 
Dube'-Rioux et al. 1989). Third, distract attention from the duration of the wait (Katz, 
Larson, and Larson, 1991). This can be done for questions that are taking an inordinate 
amount of time to research by updating the caller with the current progress. 

Fourth, when service is delayed past the estimated time, it is important to 
apologize for the failure, but it is equally important to sound sincere since insincere 
apologies can actually lower satisfaction (Clemmer and Schneider, 1993). Fifth, when
service is delayed past the estimated time, it is often important to explain to the caller why 
the delay occurred or they might infer their own reasons for the delay (Taylor, 1994a). 

Sixth, assess caller attitudes, evaluate their time pressures, and handle first those 
callers with the greatest perceived need (Katz, Larson, and Larson, 1991; Dube'-Rioux et 
al. 1989). Seventh, provide callers information regarding peak demand times, so that they 
know when to expect longer waits (Katz, Larson, and Larson, 1991; Clemmer and
Schneider, 1993).



160 
Service System Simulation 

The DIPRC at the University of Florida was simulated using the GPSS/H 
simulation program. The simulation was verified using three methods: (1) traces of the 
simulation program code using the interactive debugger, (2) tests of logic relationships 
(i.e., model behavior), and (3) a limited form of the simulation was compared to analytical 
results based on an M/M/3 queue. In addition, the simulation was validated using three 
methods: (1) face validity was established by simulation walk-throughs with DIPRC co- 
directors, (2) expected simulation behavior was evaluated using extreme condition tests, 
and (3) the results of the simulation were compared to data collected from the actual 
system. The simulation passed all steps of the verification and validation process. It was 
therefore determined that the simulation was an acceptable decision making tool for use in 
the DIPRC.
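
For the analytical comparison used in the third verification step, the steady-state behavior of an M/M/3 queue can be computed in closed form. The Python sketch below implements the standard M/M/c (Erlang C) formulas; the arrival and service rates shown are placeholders rather than the rates observed in the DIPRC.

    # Standard M/M/c (here c = 3) steady-state formulas used as an analytical benchmark.
    # lam and mu are placeholder rates (arrivals per hour, completions per hour per server).
    from math import factorial

    def mmc_stats(lam: float, mu: float, c: int):
        rho = lam / (c * mu)                  # server utilization, must be < 1 for stability
        a = lam / mu                          # offered load in Erlangs
        p0 = 1 / (sum(a**n / factorial(n) for n in range(c))
                  + a**c / (factorial(c) * (1 - rho)))
        erlang_c = (a**c / (factorial(c) * (1 - rho))) * p0   # probability an arrival waits
        lq = erlang_c * rho / (1 - rho)       # expected number in queue
        wq = lq / lam                         # expected time in queue
        w = wq + 1 / mu                       # expected time in system
        l = lam * w                           # expected number in system (Little's law)
        return {"rho": rho, "P(wait)": erlang_c, "Lq": lq, "Wq": wq, "W": w, "L": l}

    print(mmc_stats(lam=2.0, mu=0.8, c=3))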

The simulation was used to evaluate three specific research questions. The first 
question asked how changes in staffing levels and service rates affect total service times 
and delays in service. These changes were compared to the "normal" level of operation, 
(i.e., simulation runs using observed arrival and service rates and a staffing level of three 
students). In terms of staffing levels, the simulation results suggest that a staffing level of
less than three students is simply inadequate to handle the current question volume in the 
DIPRC. However, there seemed to be significant improvements in simulated total service 
times and service delays, when the staffing level was increased from three to four students. 
Conversely, the differences in service delays and total service times between four and five
students were not dramatic, indicating that three or four students is probably an adequate
staffing level for the DIPRC. 

Changes in service rates were conducted by manipulating the research and 
approval times programmed into the simulation code based on the observed data. The 
simulation results indicated that, as expected, increases in research and approval times



161 



increased both total service times and service delays while decreases in research and 
approval times decreased both total service times and service delays. However, when 
comparing reductions in total service time and delay percentage to reductions in percent 
utilization, it appeared that service rate improvements were more efficient than adding 
servers for reducing total service times and service delays. 
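
To give a concrete sense of how such experiments can be organized, the following sketch sets up a comparable set of runs in Python using SimPy, a general-purpose discrete-event library, rather than GPSS/H. The exponential interarrival and service distributions, their parameter values, and the run length are illustrative assumptions, not the empirical distributions fitted from the DIPRC data.

    # Toy discrete-event model of a question-answering queue, varying the number of
    # student servers and a service-rate modifier (SimPy stands in for GPSS/H here;
    # the exponential distributions and rates are placeholders).
    import random
    import simpy

    def run(num_students: int, service_modifier: float, sim_minutes: int = 30 * 8 * 60, seed: int = 1):
        random.seed(seed)
        env = simpy.Environment()
        students = simpy.Resource(env, capacity=num_students)
        waits, totals = [], []

        def question(env):
            arrived = env.now
            with students.request() as req:
                yield req
                waits.append(env.now - arrived)
                # research + approval time, scaled by the service-rate modifier
                yield env.timeout(random.expovariate(1 / (90 * service_modifier)))
            totals.append(env.now - arrived)

        def arrivals(env):
            while True:
                yield env.timeout(random.expovariate(1 / 45))  # mean interarrival of 45 minutes
                env.process(question(env))

        env.process(arrivals(env))
        env.run(until=sim_minutes)
        return sum(waits) / len(waits), sum(totals) / len(totals), len(totals)

    for n in (3, 4, 5):
        for mod in (1.0, 0.9, 0.8):        # Normal, 10% decrease, 20% decrease
            wq, w, completed = run(n, mod)
            print(f"{n} students, modifier {mod:.1f}: Wq={wq:6.1f}  W={w:6.1f}  completed={completed}")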

The second question asked what combination of changes in staffing levels and 
service rates optimizes the system for delays in service and total service time. Largely, the 
results of the effectiveness ratios mirrored what was anticipated from the results of the 
first research question. The results of the simulation indicated that when trying to reduce 
the percentage of service delays, reductions in service rates were more important than the 
addition of servers. This result was consistent with research from Carruthers (1970), Chin 
and Sprecher (1990), Kumar and Kapur (1989), and Ozeki and Ikeuchi (1992). 
Interestingly, however, for reductions in total service time, appropriate staffing levels were 
at least as important as reductions in the time to research and approve questions, which 
was consistent with research done by Lamy et al. (1970). For reducing service delays, the 
optimal staffing level was three students, and for reducing total service times, the optimal 
staffing level was four students. However, in both cases improving service rates always
improved efficiency. 
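
As a worked illustration of how the effectiveness ratios in Tables 6-24 and 6-25 are formed from the simulation outputs, the short Python sketch below recomputes U/D and U/W (x1000) for the three normal-arrival staffing levels, using the values reported in Table 6-24.

    # Effectiveness ratios from Table 6-24: utilization per unit of delay (U/D) and
    # utilization (as a fraction) per minute of total service time, scaled by 1000 (U/W x1000).
    rows = [
        # (level, expected W in minutes, expected % delayed, expected % utilization)
        ("3/Normal", 239.1, 16.95, 80.90),
        ("4/Normal", 122.0, 12.98, 60.89),
        ("5/Normal",  99.6, 12.18, 48.77),
    ]

    for level, w, delayed_pct, util_pct in rows:
        u_over_d = util_pct / delayed_pct                 # e.g. 80.90 / 16.95 = 4.77
        u_over_w_x1000 = (util_pct / 100) / w * 1000      # e.g. 0.809 / 239.1 * 1000 = 3.38
        print(f"{level}: U/D = {u_over_d:.2f}, U/W (x1000) = {u_over_w_x1000:.2f}")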

The third question asked how sensitive the optimal solution was to changes in the 
arrival rates. Since fluctuations in the rate of arrivals can dramatically affect many queuing 
systems (McClain and Thomas, 1985), it was important to identify thresholds at which the 
optimal service capacity determined by question two above were affected. The optimum 
solutions tended to be relatively stable. Call volume would have to increase by 20% to 
25% or decrease by 10% to 20% for the solution to change. In practical terms, these 
results indicated that an increase in volume of approximately 53 to 67 calls per month (i.e., 
average of 266 calls per month * 0.25 = 66.5), would reflect a need to add an additional 



162 



student. A decrease in call volume of approximately 27 to 53 calls per month (i.e., 266 * 
0.20 = 53.2), would allow for a decrease in the number of students staffing the center. 
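
The conversion of these percentage thresholds into monthly call volumes is simple arithmetic, sketched below in Python using the average monthly volume cited above.

    # Converting the arrival-rate sensitivity thresholds into monthly call volumes,
    # using the average of 266 calls per month cited in the text.
    average_calls_per_month = 266

    increase_to_add_student = (average_calls_per_month * 0.20, average_calls_per_month * 0.25)
    decrease_to_drop_student = (average_calls_per_month * 0.10, average_calls_per_month * 0.20)

    print(f"Add an additional student if volume rises by roughly "
          f"{increase_to_add_student[0]:.1f} to {increase_to_add_student[1]:.1f} calls per month.")
    print(f"Reduce staffing if volume falls by roughly "
          f"{decrease_to_drop_student[0]:.1f} to {decrease_to_drop_student[1]:.1f} calls per month.")
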
In a debriefing session with the co-directors and the drug information resident, 
four potential methods for reducing service times were suggested. The first method 
involved the use of an online database to store caller contact information along with 
answers to questions posed. This improvement was deemed to have the most potential for 
improving the response time of the DIPRC. Furthermore, this database had already been 
slated for implementation and was beginning the initial stages of the design process at the 
time of this project. This database is expected to reduce the amount of time required to take
calls since repeat callers will presumably already have their information entered into the 
database. Also, since many questions tend to be repeated over the course of time, 
previous responses can be retrieved and reused, which could substantially decrease 
research times for many questions. The second method involved the encouragement of 
students to use the pager system to contact co-directors to approve responses when they 
are out of the office. Although there is usually someone present in the DIPRC who can 
approve responses, occasions arise when all persons eligible to approve answers are out. 
More effective use of the pager system will prevent high priority questions (e.g., "stat" 
and "today") from being delayed because the student is waiting for approval. The third 
method proposed during this debriefing would be to develop a hierarchy of approvals, so 
that for certain types of questions (such as phone numbers for drug companies and drug 
identification questions) students may give a response without prior approval. The fourth 
method suggested concerned the addition of several computers that would allow students 
more immediate access to word-processing and on-line information resources without 
having to wait for other students to finish. It was also noted that once the online database 
has been implemented, it will be crucial for every student to have access to a computer. 
The overriding question that remains is the issue of whether or not simulation is 
useful as a tool for understanding complex service systems, such as drug information and



163 



other pharmacy systems. The answer to this question seems to be a qualified "yes". Once 
the simulation for this study was constructed, verified, and validated, the simulation 
provided a relatively easy means of exploring the effect of service capacity changes on 
various queue statistics (i.e., W, Wq, L, Lq, p, and delays). Clearly this attempt at 
simulating a drug information service has demonstrated the utility of simulation, and with 
the current health care environment pushing towards cost containment, resource 
efficiency, and total quality management, there seems to be a need for tools that can help 
decision makers evaluate and design (or redesign) complex service systems. It must be 
considered, however, that months of background research and development went into the 
construction, verification, and validation of the simulation used in this project. As Schriber 
(1991) states, "A simulation should be started well before the results are needed. In 
practice, unfortunately, the results of simulation are usually needed 'yesterday'" (p. 7). 
Unfortunately, the length of time necessary to develop a good simulation model may not
always be available; therefore, simulation may not always be the most appropriate tool
for systems analysis. 

Conclusions 

Four conclusions were derived from the results of this study. First, based on 
results from this research and research done by others, there appears to be a clear 
relationship between evaluations of perceived service quality and evaluations of overall 
service quality; however, scales based on derivatives of the SERVQUAL scale (such as 
SERVPERF and the PSQ instrument used in this study) seem limited in their ability to 
capture perceptions of quality beyond functional quality. 

Second, perceptions of drug information service quality are related to the callers' 
behavioral intentions, suggesting that when callers feel that the service delivers high 



164 



quality service, they will not only use the service again, but will also recommend the 
service to a colleague when given the opportunity. 

Third, drug information service providers should be concerned with both delays in 
service and perceived wait times, since both appear to have an effect on PSQ. Actual wait
times do not seem to be related to PSQ. Therefore, managers should use operations 
management approaches to reduce the frequency of service delays (i.e., reduce the
service time and/or add servers) and they should use perceptions management approaches
(e.g., expected wait information, apologies, etc.) to improve the experience of waiting. 

Fourth, based on the simulation results, it appears that for this setting, reducing the 
service rates is a more efficient method of improving the percentage of delays than
increasing the number of servers. However, if total service time were to be considered an 
important indicator, then increases in staffing become at least as important as reductions in 
research and approval times. 

Limitations of Study 

There were four study limitations that deserve consideration. First, the design of 
the questionnaire portion of this project was exploratory and non-experimental, which 
makes the internal validity of the results difficult to determine. In addition, only one 
measurement of perceived service quality, overall service quality, perceived service time, 
and behavioral intention was made for each subject. Therefore, since unmeasured external 
variables could have mediated or intervened in the actual relationships, the results reported
in this study could be spurious. However, the fact that the relationships observed were 
consistent with those reported by Cronin and Taylor (1992) and other researchers helps to 
confirm the direction and strength of the observed relationships. In addition, since the 
research design was not experimental (or quasi-experimental), the existence and direction 
of causality could not be assumed. 



165 



Second, because of time and cost constraints, the study data were not drawn from 
a random sample of callers. Therefore, it is possible that some of the results may have 
occurred due to sample idiosyncrasies. However, the characteristics of the concurrent 
sample were similar to the characteristics observed from the historical sample. Also,
there was no reason to suspect that the responses given on the questionnaires were 
dependent on one another. Therefore, there was no reason to suspect that different results
would have been reached using a truly random sample. 

Third, it was not possible to use this simulation to analytically solve for the 
"true" optimum state (as is possible with other analytical tools used in operations research, 
such as linear programming). Therefore, the optimum solution presented in this study is 
only the best choice selected from the simulated experiments. If other variables of interest 
were to be studied and/or different inputs were given to the model, then the results may
differ. Furthermore, since actual manipulations of the system were not conducted, it is not 
possible to know for sure whether or not the results of the simulation are truly accurate 
when actual changes are made to system inputs (e.g., staffing level, service rates, and
arrival rates). 

Fourth, the simulation program itself may not be generalizable to other drug
information settings, given that other centers may have different work patterns, different
calling populations, and varying responsibilities. Consequently, the specific results (i.e.,
queue statistics) generated by the simulation model created in this project are not 
necessarily generalizable to other drug information services. Also, since this study is the 
first to use simulation to examine the effectiveness of drug information services, there are 
no studies available to compare the simulated results. 



166 
Recommendations for Future Studies 

Use of Simulation in Other Pharmacy Settings 

One of the primary goals of this study was to determine if simulation could 
effectively be used as a decision making tool in pharmacy systems. While this study has 
made a significant step in that direction, there are still questions to be answered. One of 
the initial reasons that the drug information service setting was selected was because of 
the relative simplicity of this system versus other pharmacy systems. This study indicated 
that simulation was a powerful decision making tool for analyzing service systems; 
however, it takes considerable time and effort to adequately construct, verify, and validate 
a simulation. Therefore, its usefulness in more complex systems is still at issue. In the
researcher's opinion, there are at least three pharmacy settings where this simulation 
methodology could be implemented next with immediate impact in the literature. 

First, an interesting use of simulation would be to study the effects of workload on 
medication error rates in a hospital or community pharmacy. This research might answer 
several questions. What is the workload/staffing balance in order to minimize error rates? 
How do the contributing factors of errors (e.g. workload, workflow, fatigue, and human 
factors) interrelate? Can standards be set for workload limits? 

Second, one of the most frequently mentioned barriers to the implementation of 
pharmaceutical care in community pharmacies is lack of sufficient staff and/or time to 
provide services. Simulation could help provide practitioners with realistic expectations as
to how many pharmacists/technicians are needed to provide pharmaceutical care services, 
how/when patients could be scheduled, and how changes in workflow could improve 
efficiency of the current staff. 

Third, in previously published research by the researcher (Halberg et al., 1996) 
involving the use of anesthetics in an ambulatory care setting there were differences 



167 



discovered among anesthetics in terms of both time and cost factors. However, two
limitations of this study were: (1) it did not account for labor costs of nursing staff and
anesthesiologist time because of the difficulty of tracking this information; (2) small sample
sizes restricted the power of the statistical tests used to detect differences in anesthetics. 
If the system of surgery could be simulated with some accuracy then more precise 
pharmacoeconomic judgements could be made concerning the use of anesthetics. This 
method could also extend itself eventually to other pharmacoeconomic choices where 
sample sizes are low or when costs of collecting data are restrictive in terms of dollars or 
time. 

Assessment of the Service Quality of Drug Information Services

Another goal of this study was to establish the relationships between perceived
service quality and actual service time or delays in service. Unfortunately, the 
relationships observed were weaker than expected. Interestingly, however, perceived 
service time was related to perceived service quality with some significance. Although 
strong relationships may not actually exist between perceived service quality and actual 
service time or delays in service in the drug information service setting, there is still 
considerable room for research in this area, especially in pharmacy systems (such as 
community pharmacy) where lay persons rather than professionals are the primary 
consumers. 

First, it may be that the SERVQUAL and SERVPERF instruments developed by 
Parasuraman et al. (1988) and Cronin and Taylor (1992) are not adequate for measuring 
the perceived service quality for professional services, such as a drug information service. 
The distinction between consumers and professionals with regard to perceived service 
quality and the development of a more useful scale for use in assessing the perceived



168 



service quality of professional services would fill a gap in the evolving service quality 
literature. 

Second, one of the theoretical distinctions between perceived service quality and 
consumer satisfaction is that customer satisfaction is typically concerned with only a single 
service encounter, and perceived service quality is reflective of all experiences and 
impressions a consumer may hold. It may be that consumer satisfaction/dissatisfaction 
(CS/D) measures behave differently than perceived service quality measures with respect 
to actual service time and service delays. Therefore, it may be more appropriate to relate 
actual service time to some measure of customer satisfaction rather than perceived service 
quality. Beyond CS/D, other frameworks such as perceived value may obtain more useful 
results for combining aspects of operations research with perceptions management. 

Third, beyond the work done by Hornik (1982 and 1984) and Katz, Larson, and 
Larson (1992), it seems that very little research has been done to ascertain the 
relationship between perceived service time and actual service time as it applies to service
settings. If this relationship can be more accurately described, then it may create 
additional avenues for managers who desire to improve quality using operations 
management techniques. 



APPENDIX A 
TEXT OF PRE-TEST COVER LETTER 



June 6, 1997 



Dear Colleague, 

The Arkansas Poison and Drug Information Center (APDIC) at UAMS 
College of Pharmacy is currently working toward improving the quality of 
the services we provide. In order to achieve this goal, we have decided to 
ask our most recent callers some specific questions regarding our service. 

You can help us by participating in a brief survey concerning your recent 
experience(s) with the APDIC. Your prompt response is very important to 
us. You are one of only a small number of practitioners who are being 
asked to give their opinions about our service, so it is critical that each 
questionnaire is completed and returned. 

Please answer all of the questions in this questionnaire (it should only take 
between 5-10 minutes to fill out). When complete, refold the questionnaire 
and tape it closed. A stamp has been placed on the questionnaire for your 
convenience. You may be assured of complete confidentiality. The 
questionnaire has an identification number for mailing purposes only. This
is so we may check your name off the mailing list when your questionnaire 
is returned. Your name will never be placed on the questionnaire itself, nor 
will your responses be linked to you personally during the analysis. 

Thank you for your participation. 

Sincerely, 



Charles S. Campbell, P.D. 

Director, Arkansas Poison and Drug Information Center 



Daniel L. Halberg 

Doctoral Candidate, University of Florida 



169 



APPENDIX B 
PRE-TEST QUESTIONNAIRE - VERSION ONE



DIRECTIONS: Please answer the questions to the best of your ability by placing a 
checkmark in ONE of the boxes next to each question. There are four sets of 
questions: (1) some basic information about you; (2) your feelings about the 
Arkansas Poison and Drug Information Center (APDIC) at the University of 
Arkansas for Medical Sciences; (3) your perceptions regarding the amount of 
time the APDIC took to fulfill your request; and, (4) your overall feelings regarding 
the APDIC and your impressions about future behaviors regarding the APDIC. 

PART I: The following two questions gather some information about you. This 
information will be used in conjunction with the information given in the rest of 
the questionnaire to assess how needs and perceptions differ among our callers. 

(1 ) What is your profession? 

□ RPh/Pharm.D.     □ Physician     □ Nurse/Nurse Practitioner     □ Other: __________



(2) How often do you use the APDIC? 

□ First time user       □ 3-5 times per year       □ 10-15 times per year
□ 1-2 times per year    □ 5-10 times per year      □ more than 15 times per year

PART II: The following set of statements relate to your feelings about the 
Arkansas Poison and Drug Information Center (APDIC) at the University of 
Arkansas for Medical Sciences. For each statement, please check the box that 
best describes the extent to which you believe the APDIC has that characteristic. 
The range of selection varies from "Strongly Agree" to "Strongly Disagree"; 
however, you may check any of the boxes provided. If you feel that you cannot 
answer a question, or that the question does not apply to you, you may check the 
box labelled "Don't Know". 



(3) The APDIC has the equipment and information resources necessary to answer my questions.

(4) When I call the APDIC, background noise on their end interferes with my ability to communicate over the telephone.

(5) When I receive written materials from the APDIC, they are clear and easy to read.

(6) Employees of the APDIC speak in a manner that is easy to understand.

(7) When the APDIC promises to do something by a certain time, it does so.



(OVER) 



171 



172 



(8) When I have a problem, the APDIC is sympathetic and reassuring.

(9) The APDIC is dependable.

(10) The APDIC provides its services in the time it promises.

(11) The APDIC does not give me individual attention.

(12) The APDIC does not tell me exactly when services will be performed.

(13) The APDIC keeps its records accurately.

(14) I do not receive prompt service from APDIC employees.

(15) Employees of the APDIC are not always willing to help me.

(16) Employees of the APDIC are too busy to respond to caller requests promptly.

(17) I can trust employees of the APDIC.

(18) Employees of the APDIC do not give me personal attention.

(19) Employees of the APDIC are polite.

(20) I feel safe in my interactions with the APDIC employees.

(21) Employees get adequate support from the APDIC to do their jobs well.

(22) Employees of the APDIC do not know what my needs are.



173 



(23) The APDIC does not have my best interests at heart.

(24) The APDIC does not have operating hours convenient to me.



PART III: The following questions regard your perceptions about the length of 
time in which the service was rendered. Please think about the next four items in 
terms of the last question you presented to the APDIC, and react to the 
statements below using the scale provided. Again, you may check any of the 
boxes on the scale to show how strong your feelings are. 

(25) The amount of time that it took the APDIC to respond to my most recent question was acceptable.

(26) By the time I received a response from the APDIC, the information was no longer useful to me.

(27) I wish the APDIC could provide a quicker response to my questions.

(28) The amount of time that it took the APDIC to respond to my most recent question was

□ MUCH SHORTER THAN I EXPECTED        □ LONGER THAN I EXPECTED
□ SHORTER THAN I EXPECTED             □ MUCH LONGER THAN I EXPECTED
□ EQUAL TO WHAT I EXPECTED



(OVER) 



174 



PART IV: The following seven statements relate to your overall feelings about the
APDIC. Please respond by checking the box which best reflects your own
perceptions.

(29) The overall quality of the services provided by the APDIC is best described as

□ Excellent   □ Very Good   □ Good   □ Fair   □ Poor   □ Unacceptable

(30) The responses I receive from the APDIC are useful to me in my practice.

(31) The responses I receive from the APDIC are essential to me in my practice.

(32) It is important that the APDIC fax me the supporting documents (e.g., recent literature) for their answers to my questions.

(33) The APDIC's answers to my questions are used to improve patient outcomes.

(34) I intend to use this service in the future.

(35) I would recommend this service to a colleague.

(36) ADDITIONAL COMMENTS: Is there anything else that you would like to tell us about your experience(s) with the APDIC? Also, any comments you wish to make regarding how we could improve our service will be appreciated, either here or in a separate letter.



Thank you very much for your help. 



APPENDIX C 
PRE-TEST QUESTIONNAIRE - VERSION TWO



DIRECTIONS: Please answer the questions to the best of your ability by placing a 
checkmark in ONE of the boxes next to each question. There are four sets of 
questions: (1) some basic information about you; (2) your feelings about the 
Arkansas Poison and Drug Information Center (APDIC) at the University of 
Arkansas for Medical Sciences; (3) your perceptions regarding the amount of 
time the APDIC took to fulfill your request; and, (4) your overall feelings regarding 
the APDIC and your impressions about future behaviors regarding the APDIC. 

PART I: The following two questions gather some information about you. This 
information will be used in conjunction with the information given in the rest of 
the questionnaire to assess how needs and perceptions differ among our callers. 

(1) What is your profession? 

□ RPh/Pharm.D.     □ Physician     □ Nurse/Nurse Practitioner     □ Other: __________



(2) How often do you use the APDIC? 

□ First time user       □ 3-5 times per year       □ 10-15 times per year
□ 1-2 times per year    □ 5-10 times per year      □ more than 15 times per year

PART II: The following set of statements relate to your feelings about the
Arkansas Poison and Drug Information Center (APDIC) at the University of 
Arkansas for Medical Sciences. For each statement, please check the box that 
best describes the extent to which you believe the APDIC has that characteristic. 
The range of selection varies from "Strongly Agree" to "Strongly Disagree"; 
however, you may check any of the boxes provided. If you feel that you cannot 
answer a question, or that the question does not apply to you, you may check the 
box labelled "Don't Know". 



(3) The APDIC has the equipment and information resources necessary to answer my questions.

(4) When I call the APDIC, background noise on their end interferes with my ability to communicate over the telephone.

(5) When I receive written materials from the APDIC, they are clear and easy to read.

(6) Employees of the APDIC speak in a manner that is easy to understand.

(7) When the APDIC promises to do something by a certain time, it does so.



(OVER) 



176 



177 



(8) When I have a problem, the APDIC is sympathetic and reassuring.

(9) The APDIC is dependable.

(10) The APDIC provides its services in the time it promises.

(11) The APDIC does not give me individual attention.

(12) The APDIC does not tell me exactly when services will be performed.

(13) The APDIC keeps its records accurately.

(14) I do not receive prompt service from APDIC employees.

(15) Employees of the APDIC are not always willing to help me.

(16) Employees of the APDIC are too busy to respond to caller requests promptly.

(17) I can trust employees of the APDIC.

(18) Employees of the APDIC do not give me personal attention.

(19) Employees of the APDIC are polite.

(20) I feel safe in my interactions with the APDIC employees.

(21) Employees get adequate support from the APDIC to do their jobs well.

(22) Employees of the APDIC do not know what my needs are.



178 



(23) The APDIC does not have my best interests at heart.

(24) The APDIC does not have operating hours convenient to me.



PART III: The following questions regard your perceptions about the length of 
time in which the service was rendered. Please think about the next four items in 
terms of the last question you presented to the APDIC, and react to the 
statements below using the scale provided. Again, you may check any of the 
boxes on the scale to show how strong your feelings are. 

(25) The amount of time that it took the APDIC to respond to my most recent question was acceptable.

(26) By the time I received a response from the APDIC, the information was no longer useful to me.

(27) I wish the APDIC could provide a quicker response to my questions.

(28) The amount of time that it took the APDIC to respond to my most recent question was

□ MUCH SHORTER THAN I EXPECTED        □ LONGER THAN I EXPECTED
□ SHORTER THAN I EXPECTED             □ MUCH LONGER THAN I EXPECTED
□ EQUAL TO WHAT I EXPECTED



(OVER) 



179 



PART IV: The following seven statements relate to your overall feelings about the
APDIC. Please respond by checking the box which best reflects your own
perceptions.

(29) The overall quality of the services provided by the APDIC is best described as

□ Excellent   □ Very Good   □ Good   □ Fair   □ Poor   □ Unacceptable

(30) The responses I receive from the APDIC are useful to me in my practice.

(31) The responses I receive from the APDIC are essential to me in my practice.

(32) It is important that the APDIC fax me the supporting documents (e.g., recent literature) for their answers to my questions.

(33) The APDIC's answers to my questions are used to improve patient outcomes.

(34) I intend to use this service in the future.

(35) I would recommend this service to a colleague.

(36) ADDITIONAL COMMENTS: Is there anything else that you would like to tell us about your experience(s) with the APDIC? Also, any comments you wish to make regarding how we could improve our service will be appreciated, either here or in a separate letter.



Thank you very much for your help. 



APPENDIX D 
PRE-TEST FOLLOWUP POSTCARD 



UAMS 



University of Arkansas for Medical Sciences



COLLEGE OF PHARMACY

4301 West Markham St., Slot 522

Little Rock, Arkansas 72205-7122 



(Side One) 



Dear Colleague; 

About four weeks ago, a questionnaire seeking your opinions about the service quality of the Arkansas 
Poison and Drug Information Center (APDIC) was mailed to you.

If you have already completed and returned the questionnaire to us, please accept our sincere thanks.
If not, please do so today. Because it was sent to only a small sample of our recent callers it is 
extremely important that yours also be included in the study if the results are to accurately represent 
the feelings of the professionals we serve. 

If by some chance you did not receive the questionnaire, or it got misplaced, please call me at (352)
392-9C35 and I will get another one in the mail to you. Your contribution to the success of this study 
will be greatly appreciated. 

Sincerely, 



Daniel L. Halberg 
Doctoral Candidate 



(Side Two) 



180 



APPENDIX E 
RESPONSES TO PRETEST QUESTIONNAIRE VERSION ONE





Version 1 - "Don't Know" Response Value Excluded

Response columns: Strongly Agree, Agree, Somewhat Agree, Neutral, Somewhat Disagree,
Disagree, Strongly Disagree, No Response


Q-1 


57 


1 


2 


3 











1 


Q-2 


6 


1 


7 


8 


13 


28 





1 


Q-3 


25 


33 


1 


2 





1 


1 


1 


Q-4 


1 








1 





26 


36 





Q-5 


20 


30 


3 


3 


1 


1 





6 


Q-6 


34 


28 


1 


1 














Q-7 


28 


30 


4 


1 











1 


Q-8 


20 


22 


6 


14 











2 


Q-9 


32 


29 


2 


1 














Q-10 


29 


29 


4 


2 














Q-11 





1 





2 


2 


26 


33 





Q-12 


1 


3 


4 


4 


4 


25 


23 





Q-1 3 


11 


16 





27 


1 








g 


Q-14 





1 





1 


4 


24 


33 


1 


Q-15 











2 


3 


21 


38 





Q-16 


1 





3 


4 


1 


23 


32 





Q-17 


27 


25 


2 


4 





2 


2 


2 


Q-18 











1 


4 


26 


33 





Q-1 9 


39 


23 


2 








0. 








Q-20 


31 


30 


1 


2 














Q-21 


16 


24 


3 


17 











4 


Q-22 








2 


13 


3 


26 


19 


1 


Q-23 


1 








2 


3 


29 


29 





Q-24 








1 


1 


1 


28 


33 





Q-25 


30 


30 


4 

















Q-26 











1 


3 


29 


31 





Q-27 





4 


7 


16 


2 


17 


18 





Q-28 


8 


10 


45 


1 














Q-29 


31 


26 


1 


1 











5 


Q-30 


35 


23 


1 





1 








4 


Q-31 


31 


20 


6 


1 





2 





4 


Q-32 


19 


23 


8 


8 


1 





1 


4 


Q-33 


35 


22 


1 


1 


1 








4 


Q-34 


44 


16 

















4 


Q-35 


45 


14 


1 














4 



181 



APPENDIX F 
RESPONSES TO PRETEST QUESTIONNAIRE VERSION TWO





Version 2 - "Don't Know" Response Value Included

Response columns: Strongly Agree, Agree, Somewhat Agree, Neutral, Somewhat Disagree,
Disagree, Strongly Disagree, Don't Know, No Response


Q-1 


62 


1 


1 


4 














2 


Q-2 


5 


4 


6 


15 


10 


29 








1 


Q-3 


26 


41 














1 


1 


1 


Q-4 











2 


3 


24 


39 


1 


1 


Q-5 


17 


36 


6 


2 


2 








6 


1 


Q-6 


36 


33 




















1 


Q-7 


29 


32 


2 


2 


1 








3 


1 


Q-8 


21 


31 


4 


5 











8 


1 


Q-9 


31 


34 


4 

















1 


Q-10 


27 


35 


2 


2 


1 





1 


2 





Q-11 





1 








3 


28 


37 


1 





Q-12 


1 


3 


1 


4 


1 


28 


27 


5 





Q-13 


9 


14 


2 


9 








1 


34 


1 


Q-14 











1 


3 


16 


50 








Q-15 














1 


16 


53 








Q-1 6 


1 








1 


5 


15 


42 


5 


1 


Q-1 7 


27 


28 


5 


4 








2 


4 





Q-18 


1 


1 





1 


3 


18 


44 


2 





Q-19 


37 


32 


1 




















Q-20 


26 


38 


1 


1 











4 





Q-21 


16 


26 


3 


3 








1 


20 


1 


Q-22 





1 


2 


5 


4 


23 


27 


7 


1 


Q-23 





1 





2 


2 


25 


36 


4 





Q-24 


4 








4 





22 


33 


7 





Q-25 


34 


34 








1 





1 








Q-26 








1 





2 


24 


42 





1 


Q-27 





4 


7 


11 


6 


21 


18 


1 


2 


Q-28 


10 


20 


38 


2 





Q 











Q-29 


40 


23 


3 

















4 


Q-30 


42 


25 




















3 


Q-31 


30 


30 


6 


1 














3 


Q-32 


25 


18 


10 


9 





2 





2 


4 


Q-33 


40 


24 


2 














1 


3 


Q-34 


52 


15 




















3 


Q-35 


48 


17 





1 





1 








3 



182 



APPENDIX G 
PRETEST QUESTIONNAIRE WRITTEN COMMENTS 



Question 3: The APDIC has the equipment and information resources necessary to 
answer my questions. 

"Herbal products are an area of constant questions that good references are needed (none 
are even available besides Lawrence and Micromedex.)" 



Question 4: When I call the APDIC, background noise on their end interferes with my 
ability to communicate over the telephone. 

"No background noise." 



Question 5: When I receive materials from the APDIC, they are clear and easy to read.



"N/A. Most of my info is over the phone. I have not received written material." 

"Xeroxes sometimes off center and cut off part of the page." 

"Don't Know." 

"We have not received any written material yet." 

"N/A" (2) 

"Just the faxes are hard to read." 

"Faxes sometimes hard to read. This is to be expected." 

"APDIC has offered. I've never needed." 



183 



184 

Question 7: When the APDIC promises to do something by a certain time, it does so.

"Have only asked for information very quick once." 

Question 8: When I have a problem, the APDIC is sympathetic and reassuring. 

"Professional." <Note: Was a written on the questionnaire as if to clarify the respondents 
interpretation of the question. > 

"This could be asked in a different way." 

Question 10: The APDIC provides its services in the time it promises. 

"Usually. Sometimes a small delay because of another emergency on a difficult question. 
They always explain the reason for the delay." 

Question 12: The APDIC does not tell exactly when services will be performed 

"If I ask when, they always tell me." 

"But I have only asked them for specific times." 

Question 13: The APDIC keeps its records accurately. 

"Don't Knov^/." (4) 

"Don't Know. I would assume that they do." 

"I don't know. I've never noticed any communication/relay problems." 

"Don't really understand the statement." 

"To the best of my knowledge." 



Question 16: Employees of the APDIC are too busy to respond to caller requests 
promptly. 

"Service isn't as fast as it used to be but certainly is still very timely and prioritized based 
on the request." 



185 

Question 1 7: I can trust employees of the APDIC. 

"Don't Know." 

Question 20: I feel safe in my interactions with the APDIC employees. 

"?" <Circled the word safe.> 
"Huh?" 



Question 21: Employees get adequate support from the APDIC to do their jobs well. 



"Don't Know." (2) 

"How would I know?" 

"Who? My employee or APDIC employee?" 

"The only hint I have is how well the employees answer my questions." 

"N/A" 

"Don't understand." 

Question 22: Employees of the APDIC do not know what my needs are. 

"They always readdress question if they didn't understand the focus." 
"Don't understand." 

Question 27: I wish the APDIC could provide a quicker response to my questions.



"They can't get any faster." 

"I think that is an unfair question. It always depends on what other questions they have 
pending. Callers need to understand and respect that. I have never been unsatisfied with a 
response time of an urgent question." 



186 

"Due to prior experience with them." <In explanation of their answer. > 
"In general, or just the last one?" 



Question 28: The amount of time that it took the APDIC to respond to my most recent 
question was 

"I always get great service!" 

"But APDIC was unable to FAX or mail the information I requested. I found this a little 
odd and it did not fully meet my needs for drug information!" 



Question 32: It is important that the APDIC fax me the supporting documents (e.g. 
recent literature) for their answers to my questions. 

"This tends to vary from situation to situation." 

"Undocumented answers are useless!" 

"If necessary, yes." 

"In most cases but not always." 

"FAX or mail." 

"N/A" 

"Very important." 

Question 35: I would recommend this service to a colleague. 

(Three checks on strongly agree) 
"And have done." 



187 



Question 36: ADDITIONAL COMMENTS: Is there anything else that you would like 
to tell us about your experience(s) with the APDIC? Also, any comments you wish to 
make regarding how we could improve our service will be appreciated, either here or in
a separate letter. 

"They need to update Generic Listings for both Rx and OTC medications - identification 
and manufacturer." 

"I believe the APDIC is one of the best things the Pharmacy School does for the public. 1 
have never had any thing but good experiences when I call for drug or poison information. 
In fact, I called today and was given an answer to a poison question in 2 minutes. Keep 
up the good work. <NAMES OMITTED>." 

"Improve the quahty of your FAX equipment. Thank you!" 

"No, there is nothing else I would like to say, except that the service was great!" 

"The APDIC has come to our rescue many times - 1 can't say enough about the service! 
<NAMES OMITTED> are the greatest! Thanks!" 

"You do an outstanding job! I could not provide the level of care to my patients currently 
available without your support." 

"Thanks for being there for so many health care professionals." 

"I have been very satisfied with the type of service and the professionalism of the 
APDIC." 



"The information I was given was very useful and was given in a prompt manner. I would 
definitely recommend this service to a colleague." 

"Although budgetary restraints have impacted on all of us, I would encourage you and 
your library to expand your journal holdings as much as possible." 

"All APDIC people I have spoken with have been pharmacists - have the understanding of 
the situations line workers, clinical staff (pharmacy) and pharmacy administration staff are 
dealing with at the time of the call. Thanks and keep up the terrific service!" 

"I've been very satisfied with the service I've received and always have been deah with 
respectfiilly and promptly." 

"Keep up the good work!" 



188 



"A very well run operation. I know I can 'hang my hat' on APDIC. Great bunch of 
dependable, personable, knowledgeable professionals improving the overall health care for 
Arkansans." 

"They are great!" 

"Wish you could search abstracting databases such as EMBASE." 

"I can't think of a thing to improve. Everyone I have talked to is very knowledgeable, 
courteous and helpfial. They do an outstanding job of prioritizing questions and soliciting 
the information in order to focus the question correctly. Thank you for making my job 

easier!" 

"Excellent." 

"One minor thing - 1 like to know who I am talking to - 1 usually expect staff at an 
organization to identify themselves when answering." 

"The APDIC Employee tried to answer my question about a specific generic equivalency, 
but could not find an answer." 

"I began using APDIC when we became an Owen account late '96. So far I have used 
APDIC [approximately 2 times a month] <NAME OMITTED> Usually takes my calls. 
He does a great job in obtaining a response and getting back to me that same day and is 
good about letting me know how long it will take depending on the workload. Others I 
have interacted with have also provided great customer service." 

"I have used the APDIC several times and I have always found the staff to be courteous 
and well trained. I've received intelligent answers to my questions as well as supporting 
documents when needed." 

"Good job. Thank you." 

"I have been extremely happy with my service. There are many times I call just for 
information (not an emergency) to questions I get on various drugs, etc. For example, 
today a nurse called and wanted to know if MS Contin 30 mg. could be inserted rectally.
She had heard that it could but the pharmacist she called didn't know. I called APDIC 
and got the information orally and by fax in a matter of minutes. It is great to know that 
questions like this can be answered. Keep up the good work. Thank you." 

"On the main points, very grateful for a useftil service. My only concern is I have no idea 
what resources were used to handle my queries." 

"I called recently and needed dosing information on several drugs (higher doses than I 
could find in any references). I needed this info stat (within 15 minutes) Mark called me 



189 



back within the 15 minutes with the info I needed. I really appreciate the promptness of 
your service. APDIC is always very helpful to me!" 

"We are very happy with this service." 

"Your staff is very courteous and professional and appear to enjoy learning from our 
questions and situations that we present." 

"The center is underutilized by Arkansas pharmacists." 

"I am very grateful for the competent staff and the service I have received in the past." 

"I find the staff of APDIC to be very knowledgeable, supporting, and confident in their 
work. Keep up the great work." 

"Overall the service is very good. I have submitted questions though that were never 
answered. If no answer could be found, it would be helpful if someone could call me back
to let me know. Thank you." 

"They didn't tell me how long it would take to get info so I didn't know if it would be 5 
minutes or 2 days. Also, the faxed response was actually a little bit of overkill - 26 pages 
was quite a bit more than what I really needed." 

"I think the APDIC is a great resource. I don't know what I would do without it." 

"With the Owen Healthcare, Inc. contract it may be beneficial to get on MS E-mail with 
the pharmacy directors if you are not already on it." 

"Maybe ask when info is needed. I assume that since this is a poison control center that 
these [questions] would be given priority, though I have never called about a poison 
question as we have our own poison control center at OKC." 

"I have worked in a poison control service and you clearly have the resources needed and 
the personnel to do an excellent job." 

"We think the entire staff does a great job getting the information we need to us in a 
prompt manner. They are a great benefit to us." 

"1 feel that the APDIC offers the most important information service a health professional 
member has access to. If this service were not available my ability to practice my 
profession would be diminished." 

"Your staff is always very courteous and extremely valuable." 



190 



"I work with a number of directors and clinical managers. My impression is that you 
don't always ask the right questions to determine what the requestor really needs. You 
end up providing the right answer to the wrong question. Some people need help 
expressing what they really need." 

"I have always been satisfied with the help I received from poison control and the people 
have been easy to work with. We really appreciate what you do. Thanks for the good 
work." 

"My last request was concerning an OTC, I was surprised by the response and 10 faxed 
documentation. Thanks. Keep up the good work." 

"Great!" 

"You guys make me look good!" 

"Very courteous and helpfiil." 

"I use this service for tablet identification primarily! Love the cooperation we get. My 
customers think I'm very smart!" 

"I was very pleased with the response I received from my last call to the APDIC. The 
information was very helpful and was given in a timely manner." 

"Need more availability to literature -journals, etc. Often items have been 'checked out' 
so a copy cannot be obtained. More comparative information." 

"Thank you for all the help." 

"This organization is one that I use consistently and am very happy to have this service 
available to me." 

"Great group of professionals." 

"They are an excellent source of both information from the literature and practice 
standards since most individuals interact with physicians in the medical facility. 
Pharmacists [at the APDIC] are excellent clinicians."

"My interactions with APDIC have all been over the phone. Every time the staff have 
been very helpful and polite. Most of my questions haven't had an emergent response
time but when I needed the quick response they [have] been good. In my previous
position I was physically asked to go to a DIC which was nice. We have a contract with
your department and what might be handy is a listing of the various services you offer and
maybe a list of databases you use. Bottom line, yes your department is very useful and
should be funded fully with resources and staff."



191 



"I've used and have interacted with dmg information services at UAB, Sanford University 
as well. No one compares to UAB's quality and expertise in providing drug information. 
You should try to model ALL aspects of your center after UAB's drug information 
center." 

"With the exception of one occurrence about a year ago, I've had nothing but fantastic 
results and also [the employees at the APDIC are] very capable and eager to help people. 
I always know they are right on it and not doing a haphazard job - they're very polite, 
want to help, and just plain get it done. I recommend them all of the time." 

"It's good to know you're there behind me when I need you. Thanks." 
"Good service." 



APPENDIX H 
HISTORICAL DATA SHEET 



FILE # 



DRUG INFORMATION SERVICE DATA SHEET 





INFORMATION REQUESTED 















Type of Question (Classification) 
How did this question come about? 



Requestor 



Rph 



MD 



RN 



Other 



Subscriber # : 
Phone # : ( 



Non-Subscriber : 
Extension:



Address : 



Zip Code : 



Type of Response 


Requested: 


Written 


Oral 


Both 


Either 



FILE : 
FILENAME : 



Yes 



No 



Information Reviewed By: 



Health Center : 
Pager # : 



Response Needed by : 
Stat (<15 min) 
Today 
Date: 



No Rush! 



Date Received : 

Time Received: 
Received By: 
Date Completed: 

Time Completed:

Returned By : 



193 



APPENDIX I 
DATA COLLECTION FORM 



FILE# 



DRUG INFORMATION SERVICE DATA SHEET 





INFORMATION REQUESTED 















Type of Question (Classification): 
How did this question come about? 



Requestor: 
Phone #: ( 

Extension:
Fax#: ( 



Address: 



Zip Code: 



Pager #: 



Type of Response Requested:
□ Written    □ Oral    □ Both    □ Either



Response Needed by:
□ Stat (<15 minutes)
□ Today    Time:
□ Date:
□ No Rush!



□ Subscriber #:
□ Non-Subscriber
□ Health Center

□ RPh/Pharm.D.
□ MD
□ RN/NP
□ Other:



DATE AND TIME TRACKING



Activity Codes: 1 = Reception of Call or Fax 

2 = Obtaining Information and Writing Answer 

3 = Approval 

4 = Returning Answer to Caller 



Activity 


Person 


Start Date 


Start Time 


End Date 


End Time 















































































































Response Completed By: 
Response Approved By: 
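The DATE AND TIME TRACKING grid on this form pairs each of the four activity codes with a start and end date/time, so the elapsed time spent on each stage of a question can be computed from completed forms. As a brief illustration only (this code is not part of the original study materials, and the record layout and field names are assumed), a short Python sketch of that calculation:

    from datetime import datetime

    # Activity codes as defined on the data collection form.
    ACTIVITIES = {1: "Reception of Call or Fax",
                  2: "Obtaining Information and Writing Answer",
                  3: "Approval",
                  4: "Returning Answer to Caller"}

    def minutes_by_activity(rows):
        """Total elapsed minutes per activity code for one question.
        Each row is a dict with 'activity', 'start', and 'end' strings."""
        totals = {code: 0.0 for code in ACTIVITIES}
        for row in rows:
            start = datetime.strptime(row["start"], "%m/%d/%Y %H:%M")
            end = datetime.strptime(row["end"], "%m/%d/%Y %H:%M")
            totals[row["activity"]] += (end - start).total_seconds() / 60.0
        return totals

    # Hypothetical example: a question received at 9:05 and returned by 9:50.
    rows = [
        {"activity": 1, "start": "07/30/1997 09:05", "end": "07/30/1997 09:10"},
        {"activity": 2, "start": "07/30/1997 09:10", "end": "07/30/1997 09:40"},
        {"activity": 3, "start": "07/30/1997 09:40", "end": "07/30/1997 09:45"},
        {"activity": 4, "start": "07/30/1997 09:45", "end": "07/30/1997 09:50"},
    ]
    print(minutes_by_activity(rows))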



195 



APPENDIX J 
SEMI-STRUCTURED OUTLINE FOR STUDENT INTERVIEWS 

I. Introduction

1 . Introduce self. 

2. Explain the purpose of the interview and describe the issues that will be discussed. 

3 . Explain that their names will never be attached to anything said during the 
interview. 

4. Ask permission to audio record the interview for recollection purposes. 

II. Understanding the Service System

Please describe your job here at the DIPRC. 

Additional follow-up questions: 

a. What jobs are all of the students responsible for completing? 

b. Are there any jobs that only you are responsible for completing? 

c. Besides what we have talked about above, what other activities occupy your 
time during the week (e.g. lunch, meetings)? 

d. How long do you usually take for lunch? Are you available to take calls during 
your lunchtime? 

e. What hours is the DIPRC open to take calls? How often do your morning 
meetings disrupt these times? 

I would like you to describe the work process here at the DIPRC. Explain to me what
happens from the arrival of a question until an answer is returned to the caller. 

Additional follow-up questions: 

a. Do you fill out a sheet for each question asked? 

b. Do you answer the telephone if you are already working on a question and are 
in the office? 

c. How often do you usually work on more than one question? What is the most 
you ever worked on at one time? 

d. On some of the data sheets, an answer would be returned before it was 
approved. When does this occur? 

e. Do you obtain approval for all questions before you return an answer to the 
caller? 

f. What questions do you feel are the easiest to answer?

g. What questions do you feel are the most difficult to answer? 



196 



197 



h. What do you do after the question has been answered? 

i. How long does it take for you to "close-out" a question after you have 

returned an answer? 
j. How long did it take you to become comfortable answering questions posed by 

callers? 
k. Describe for me the use of the bulletin board? When do you use it and when 

do you hold on to the data/call sheet? 

Once you have completed a question, how do you decide which question to work on next? 

Additional follow-up questions:

a. Do some types of questions receive a higher priority than other types? 

b. Are some questions answered first because they are easier or more important?

c. Do you sometimes select questions because of interest rather than importance? 

d. Do questions gain higher importance because of the requested response time? 

e. Do questions gain higher importance because the caller is a pharmacist, physician,
etc.?

f. Do questions gain higher importance because the caller is a health center employee
or a subscriber?



III. Consumer Expectations and Perceptions

1 . What do you think is the most valued aspect of the service you provide? 

2. What do you think is the least valued aspect of the service you provide? 

3 . Have any callers seemed dissatisfied with a response you have given? Why? 

4. What factors do you think most influence a caller's perception of the quality of the 
service you provide at the DIPRC? 

5. How sensitive do you feel that the callers are to the amount of time it takes to answer
a question?

6. On your sheets, you ask the consumer to specify by when they need the question 
answered. How sensitive do you feel callers are to delays in this time frame? 

IV. Suggestions for Improvement 

1 . What did you like most about working at the DIPRC? 

2. What did you like least about working at the DIPRC? 

3. Is there anything that you need or that would make your job easier that is not provided 
by the DIPRC? 

4. Are there any suggestions you have for improving the process at the DIPRC? 



APPENDIX K 
SEMI-STRUCTURED OUTLINE FOR CO-DIRECTOR INTERVIEWS

I. Introduction

1 . Explain the purpose of the interview and describe the issues that will be discussed. 

2. Explain that their names will never be attached to anything said during the 
interview. 

3 . Ask permission to audio record the interview for recollection purposes. 

II. Understanding the Service System

Please describe your job as it relates to the DIPRC. 

Additional follow-up questions: 

a. Besides what we have talked about above, what other activities occupy your 
time during the week (e.g. lunch, meetings)? 

b. What hours is the DIPRC open to take calls? 



I would like you to describe the work process here at the DIPRC. Explain to me what
happens from the arrival of a question until an answer is returned to the caller. 

Additional follow-up questions: 

a. Who works in the DIPRC? What are their roles/responsibilities?

b. Are the students supposed to fill out a sheet for each question asked? 

c. Are the students supposed to answer the phone if they are already working on 
a question and are in the office? 

d. What is the suggested work process? How well do the students follow this 
procedure? 

e. Do you encourage or discourage the students to work on more than one 
question at a time? 

f. On some of the data sheets, an answer would be returned before it was approved.
When does this occur?

g. Is the student supposed to obtain approval for all questions before returning an
answer to the caller?

h. Describe the question approval process.

i. What questions do you think the students find are the easiest to answer? 
j. What questions do you think the students find are the most difficult to answer? 



198 



199 



k. How long does it take for the students to become comfortable answering 

questions posed by callers? 
1. Describe the use of the bulletin board? When do you use it and when do you 

hold on to the data/call sheet? 



Once a question has been completed, how does the student decide which question to
work on next?

Additional follow-up questions: 

a. Do some types of questions receive a higher priority than other types? 

b. Are some questions answered first because they are easier or more 
important? 

c. Do students sometimes select questions because of interest rather than 
importance? 

d. Do questions gain higher importance because of the requested response 
time? 

e. Do questions gain higher importance because the caller is a pharmacist,
physician, etc.?

f. Do questions gain higher importance because the caller is a health center
employee or a subscriber?



III. Consumer Expectations and Perceptions

1 . What do you think is the most valued aspect of the service you provide? 

2. What do you think is the least valued aspect of the service you provide? 

3. What factors do you think most influence a caller's perception of the quality of the 
service you provide at the DIPRC? 

4. How sensitive do you feel that the callers are to the amount of time it takes to answer
a question?

5. On your sheets, you ask the consumer to specify by when they need the question 
answered. How sensitive do you feel callers are to delays in this time frame? 

IV. Future of the DIPRC 

1 . Where do you think the DIPRC is headed for the next year? The next five years? 

2. What is the viability of getting more rotation students if you needed an increase in 
service capacity? Would you need to hire pharmacist/residents to answer questions? 

3. How fast are the pharmacists/residents when compared to the Pharm.D. students? 



APPENDIX L 
TEXT OF COVER LETTER FOR MAIN QUESTIONNAIRE 



July 30, 1997 



Dear Colleague, 

The Drug Information and Pharmacy Research Center (DIPRC) at Shands at the 
University of Florida is currently working toward improving the quality of services we
provide. In order to achieve this goal, we have decided to ask our most recent callers 
some questions regarding specific aspects of our service. 

You can help us by participating in a brief survey concerning your recent experience(s) 
with the DIPRC. Your prompt response is very important to us. You are one of only a 
small number of practitioners who are being asked to give their opinions about our 
service, so it is critical that each questionnaire is completed and returned. 

Please answer all of the questions in this questionnaire (it should only take between 5-10 
minutes to complete) and place it in the preaddressed, postage paid envelope provided. 
You may be assured of complete confidentiality. The questionnaire has an identification 
number for mailing purposes only. This is so we may check your name off the mailing list 
when your questionnaire is returned. Your name will never be placed on the questionnaire 
itself, nor will your responses be linked to you personally during the analyses. 

Thank you for your participation. 
Sincerely, 



Randy C. Hatton, Pharm.D.,BCPS 
Co-Director, DIPRC 
Clinical Professor 



Daniel L. Halberg 

Doctoral Candidate, University of Florida 



200 



APPENDIX M 
MAIN QUESTIONNAIRE



Page 1 

DIRECTIONS: Please answer the questions to the best of your ability by placing a 
checkmark in ONE of the boxes next to each question. There are four sets of 
questions: (1 ) some basic information about you; (2) your feelings about the Drug 
Information and Pharmacy Resource Center (DIPRC) at Shands at the University 
of Florida; (3) your perceptions regarding the amount of time the DIPRC took to 
fulfill your request; and, (4) your overall feelings regarding the DIPRC and your 
impressions about future behaviors regarding the DIPRC. 

PART I: The following two questions gather some information about you. This 
information will be used in conjunction with the information given in the rest of 
the questionnaire to assess how needs and perceptions differ among our callers. 



(1) What is your profession?

□ RPh/Pharm.D.    □ Physician    □ Nurse/Nurse Practitioner    □ Other:



(2) Are you a subscriber, a non-subscriber, or a Shands Health Center employee? 

□ Subscriber    □ Non-Subscriber    □ Health Center Employee



(3) How often do you use the DIPRC? 



□ First time user
□ 1-2 times per year
□ 3-5 times per year
□ 5-10 times per year
□ 10-15 times per year
□ more than 15 times per year



PART II: The following set of statements relate to your feelings about the Drug 
Information and Pharmacy Resource Center (DIPRC) at Shands at the University 
of Florida. For each statement, please check the box that best describes the 
extent to which you believe the DIPRC has that characteristic. The range of 
selection varies from "Strongly Agree" to "Strongly Disagree"; however, you may 
check any of the boxes provided. If you feel that you cannot answer a question, 
or that the question does not apply to you, you may check the box labelled "Don't 
Know". 



(4) The DIPRC has the equipment and information resources necessary to answer my
questions.

(5) When I call the DIPRC, background noise on their end interferes with my ability to
communicate over the telephone.

(6) When I receive written materials from the DIPRC, they are clear and easy to read.

(7) Employees of the DIPRC speak in a manner that is easy to understand.

[Each of the scaled statements in Parts II and III is followed by a row of checkboxes for
the response scale described in the directions, running from "Strongly Agree" to
"Strongly Disagree" with an additional "Don't Know" box.]

(OVER)



202 






203 



Page 2 

(8) When the DIPRC promises to do something by a certain time, it does so.

(9) When I have a problem, the DIPRC is sympathetic and reassuring.

(10) The DIPRC is dependable.

(11) The DIPRC provides its services in the time it promises.

(12) The DIPRC does not give me individual attention.

(13) The DIPRC does not tell me exactly when services will be performed.

(14) I do not receive prompt service from DIPRC employees.

(15) Employees of the DIPRC are not always willing to help me.

(16) Employees of the DIPRC are too busy to respond to caller requests promptly.

(17) I can trust employees of the DIPRC.

(18) Employees of the DIPRC do not give me personal attention.

(19) Employees of the DIPRC are polite.

(20) I feel safe in my interactions with the DIPRC employees.

(21) Employees of the DIPRC do not know what my needs are.



204 



Page 3 



(22) The DIPRC does not have my best interests at heart.

(23) The DIPRC does not have operating hours convenient to me.


PART III: The following questions regard your perceptions about the length of time in
which the service was rendered. Please think about the next four items in terms of the
last question you presented to the DIPRC, and react to the statements below using the
scale provided. Again, you may check any of the boxes on the scale to show how strong
your feelings are.

(24) The amount of time that it took the DIPRC to respond to my most recent question
was acceptable.

(25) By the time I received a response from the DIPRC, the information was no longer
useful to me.

(26) I wish the DIPRC could provide a quicker response to my questions.

(27) The amount of time that it took the DIPRC to respond to my most recent question
was

□ MUCH SHORTER THAN I EXPECTED
□ SHORTER THAN I EXPECTED
□ A LITTLE SHORTER THAN I EXPECTED
□ EQUAL TO WHAT I EXPECTED
□ A LITTLE LONGER THAN I EXPECTED
□ LONGER THAN I EXPECTED
□ MUCH LONGER THAN I EXPECTED



(OVER) 



205 



Page 4 

PART IV: The following seven statements relate to your overall feelings about the 
DIPRC. Please respond by checking the box which best reflects your own 
perceptions. 

(28) The overall quality of the services provided by the DIPRC is best described as: 

□ Excellent    □ Very Good    □ Good    □ Fair    □ Poor    □ Unacceptable



(29) The responses I receive from the DIPRC are useful to me in my practice.

(30) The responses I receive from the DIPRC are essential to me in my practice.

(31) It is important that the DIPRC mail and/or fax me the supporting documents (e.g.,
recent literature) for their answers to my questions.

(32) The DIPRC's answers to my questions are used to improve patient outcomes.

(33) I intend to use this service in the future.

(34) I would recommend this service to a colleague.



(35) ADDITIONAL COMMENTS: Is there anything else that you would like to tell us about 
your experience(s) with the DIPRC? Also, any comments you wish to make 
regarding how we could improve our service will be appreciated, either here or in a 
separate letter. 



Thank you very much for your help. 



APPENDIX N 
FOLLOWUP POST CARD FOR MAIN QUESTIONNAIRE 



SHANDS 

at the University of Florida



UNIVERSITY OF FLORIDA



Drug Information & Pharmacy Resource Center 
PO Box 100316 • Gainesville, FL 32610-0356



(Side One) 



Dear Colleague: 

About one week ago, a questionnaire seeking your opinions about the service quality of the Drug
Information and Pharmacy Resource Center (DIPRC) was mailed to you.

If you have already completed and returned the questionnaire to us, please accept our sincere
thanks. If not, please do so today. Because the questionnaire was sent to only a small sample of
our recent callers, it is extremely important that yours also be included in the study if the results
are to accurately represent the feelings of the professionals we serve.

If by some chance you did not receive the questionnaire, or it got misplaced, please call my office at
(352) 392-9035 or e-mail me at HALBERG@COP3.HEALTH.UFL.EDU and I will get another one
in the mail to you. Your contribution to the success of this study is greatly appreciated.

Sincerely, 



Daniel L. Halberg
Doctoral Candidate 



(Side Two) 



206 



APPENDIX O 
RESPONSES TO MAIN QUESTIONNAIRE



Main Questionnaire Responses 





        Strongly            Somewhat            Somewhat            Strongly   Don't    No
        Agree      Agree    Agree     Neutral   Disagree  Disagree  Disagree   Know     Response

Q-1       131       13       21        37         0         0         0         0         1
Q-2        76       67       49         0         0         0         0         0         8
Q-3        49       26       46        40        17        23         0         0         2
Q-4        90       84        5         6         2         0         0        13         3
Q-5         4        3        2        16         9        99        64         4         2
Q-6        49       88       11        16         0         0         2        35         2
Q-7        71       92       22         2         4         5         2         2         3
Q-8        68       98       27         4         2         0         0         3         1
Q-9        38       70       18        37         2         0         0        36         2
Q-10       74      110       10         2         1         1         0         4         1
Q-11       67      100       22         4         2         1         0         6         1
Q-12        0        5        0         9         5        91        85         7         1
Q-13        0        5       10        16         6        91        63        11         1
Q-14        4        1        3         7        10        90        84         1         3
Q-15        0        2        1         2         6        84       103         3         2
Q-16        0        3        8        10         6        87        78         9         2
Q-17       69       90       12         9         0         4         5        13         1
Q-18        0        6        0         3         3        90        95         5         1
Q-19       88      103        5         0         1         1         2         2         1
Q-20       69      107        8         6         0         1         2         7         3
Q-21        2        5        7        18        12        89        58        11         1
Q-22        0        1        2         6         4        93        84        11         2
Q-23        0        4        8        22        12        92        43        21         1
Q-24       73       94       19         6         4         3         2         1         1
Q-25        1        8       11         8        16        85        72         1         1
Q-26       10       24       37        44        12        46        24         3         3
Q-27       17       35       17        88        32        10         1         0         3
Q-28       80       86       26         0         1         0         0         0        10
Q-29       92       92        8         5         1         0         0         3         2
Q-30       50       72       38        30         6         2         0         3         2
Q-31       69       64       28        23         3         6         2         6         2
Q-32       85       76       15        13         0         1         2         8         3
Q-33      120       71        6         1         0         1         0         2         2
Q-34      129       63        4         1         0         0         0         2         4

Note: Items 1-3 are categorical (profession, subscriber status, and frequency of use), and
items 27 and 28 are likewise categorical (expected response time and overall quality); for
these items the counts are listed in the order the answer choices appear on the
questionnaire, and the agreement-scale column headings do not apply.
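As a small worked example only (this is not an analysis performed in the dissertation), the counts above can be collapsed into a top-two-box agreement percentage per item; a few lines of Python illustrate the arithmetic:

    # Counts for item Q-10 ("The DIPRC is dependable") copied from the table above,
    # in the column order Strongly Agree ... Strongly Disagree, Don't Know, No Response.
    q10 = [74, 110, 10, 2, 1, 1, 0, 4, 1]

    def top_two_box(counts):
        """Share of all returned questionnaires in the two strongest agreement categories."""
        return (counts[0] + counts[1]) / sum(counts)

    print(round(top_two_box(q10) * 100, 1))   # about 90.6% strongly agree or agree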



207 



APPENDIX P 
MAIN QUESTIONNAIRE WRITTEN COMMENTS 



Question 4: The DIPRC has the equipment and information resources necessary to 
answer my questions. 

"[NJever been there to see." 

"Have neither personally seem nor utilized equipment/info. Resources but had questions 
answered satisfactorily." 



Question 5: When I call the DIPRC, background noise on their end interferes with my 
ability to communicate over the telephone. 

"I never call" (From a physician - presumably they have their office assistant call for 
them.) 

Question 6: When I receive written materials from the DIPRC, they are clear and easy 
to read 

"No experience." 

"N/A" 

"But that's O.K." (In response to an answer of 'Somewhat Agree') 



Question 7: Employees of the DIPRC speak in a manner that is easy to understand 

"Some employees (from Asia or Middle East) have more of a problem." 

"Variable depending on student." 

"I never call" (From a physician - presumably they have their office assistant call for 
them.) 

"On one occasion the person answering phone was very hard to understand due to 
accent." 



208 



209 



"The person I spoke with was difficult to understand, but that does not mean everyone is 
like that." 

"The occassional student with ethnic accents can be difficult. (Asian, etc. . .)" 

"Foreign students struggle a little more, understandably." 

Question 9: When I have a problem, the DIPRC is sympathetic and reassuring. 

"[N]ever had one." 

"I am not looking for sympathy or reassurance!?" 

"N/A" 

Question 10: The DIPRC is dependable. 

"Outstanding." (Written above an 'Excellent' grade on the question.) 

Question 1 7: I can trust employees of the DIPRC. 

"[C]ome, now!" (Apparently relating to the relevance of the item) 

Question 19: Employee of the DIPRC are polite. 

"Very. ©" 

Question 20: I feel safe in my interactions with the DIPRC employees. 

"[DJepending on the students there that month." 
"When I can understand what they are saying." 

"N/A" 

Question 21: Employees of the DIPRC do not know what my needs are. 

"How could they if I don't tell them?" 



210 



"Regarding question #2 1 - The staff member who took the call and interpreted my 
question did not have any idea of the subject of my inquiry - PPD and anergy testing - 1 
step vs. 2 step methods for PPD; I needed to explain and spell [out] the info, for her - 1 
wish you could have experienced pharmacists processing the incoming questions." 

"Question need clarifying. Do you mean 'Understand my requests for specific info' or 
need relating to a timely response?" 

"Are you trying to trick me?" 

"Of course they don't until I tell them." 

Question 23: The DIPRC does not have operating hours convenient to me.

"Do not know your exact hours. But [they have] been available when I've needed them." 

"Do not know hours of operation." 

"I'm open week ends and central standard time [until] 6 PM." (In comment to a 
'Somewhat Agree' grade on the question." 



Question 24: The amount of time that it took the DIPRC to respond to my most recent 
question was acceptable. 

"As was the response time on 9 out of [the] last 10 requests much more than adequate 
(amazing even)." 



Question 26: I wish the DIPRC could provide a quicker response to my questions. 

"[0]nly in one often cases." 

"Sometimes you need info, quick, other times it's not important." 

"I have not had a problem in this regard." 

"Sometimes, as it was in my case, that is impossible." 



211 



Question 27: The amount of time that it took the DIPRC to respond to my most recent 
question was. 

"I called because information was not available in my current library." (Commenting on a 
'longer than expected' response.) 

"I don't know how much research was involved." 

"[A]nd exactly when they told me." 

"P.S. I also contacted the Life Extension Foundation and received an answer within 5 
minutes, because one of their staff knew the answer. However, I was not dissatisfied with 
the DIPRC response, since it involved medication not available in the U.S." 

Question 28: The overall quality of the services provided by the DIPRC is best 
described as: 

(The "Excellenf response was checked and double underlined) (2) 
"Outstanding." (Written above an 'Excellent' grade on the question.) 

Question 29: The responses I receive from the DIPRC are useful to me in my
practice. 

"N/A" 

Question 30: The responses I receive from the DIPRC are essential to me in my 
practice. 

"N/A" 

Question 31: It is important that the DIPRC mail and/or fax me the supporting documents
(e.g., recent literature) for their answers to my questions.

"Fax." 

"Sometimes, but not always." 

"Depends on the question not the same for all." 

"[T]hey did provide great [information] on our pharm. drug for a sedation workshop." 



212 



"N/A" 



Question 32: The DIPRC's answers to my questions are used to improve patient 
outcomes. 

"N/A" (2) 

"Unless I request not to bother which will be at least Vi the time." 

"[and] nurses responses to med questions." 

Question 34: I would recommend this service to a colleague. 

(Respondent marked 'highly' above recommend, indicated a 'highly recommend') 
"I have done so on numerous occasions." 
"Most definitely." 



Question 35: Additional Comments: Is there anything else that you would like to tell 
us about your experience(s) with the DIPRC? Also, any comments you wish to make 
regarding how we could improve our service will be appreciated, either here or in a 
separate letter. 

"I am grateflil that this service is available as it saves me time and helps my patients. 
Thank you." 

"I always find the staflf receptive and extremely helpfial. Results are accurate and assist me 
a great deal in my forensic identifications. I greatly appreciate the fast response time 
necessary for the type of work I perform. Thank you." 

"Are you guys on the world wide web? If so, maybe make your e-mail address available." 

"For a first time caller to your Drug Information Service I am very pleased and appreciate 
your services. My customer was from out of the country and was quite impressed with 
the info, we shared with him. Keep up the good work!" 

"Your service is provided with efficiency and effectiveness that is without competition 
from any other resource. Congratulations and thank you for your effort." 



213



"At times I think the student answering the [telephone] does not understand the question 
(or cannot understand English) They could improve their communication skills (i.e., listen 
better, restate the question, etc.)" 

"DIPRC has always been very helpful with all of my questions. The questions I ask are 
usually very difficult and obscure and are referred to DIPRC when I can't find anything
with a basic Medline search and resources I have available to me." 

"The last time I used DIPRC was for information about Redux and its possible affect on a 
patient's hair loss. I did receive a phone call back but they were also to mail some [kind] 
of survey that wasn't received." 

"I feel your services have always been invaluable to my practice with improved patient 
outcomes." 

"Keep up the good work!" 

"The last interaction and request was handled extremely well and information received was 
very necessary and helped us immensely. We have however not received this kind of 
service in the past. So many of my responses were tempered by past interactions. The 
last request could not have been handled more appropriately or professionally." 

"Hey Dr. Hatton and Professor Doering! You guys are awesome." 

"Most of these questions do not apply to my situation/question. Therefore, my answers 
may not help." 

"I do not beheve that I have used DIPRC in the past." (In response to a blank 
questionnaire.) 

"Long response time to medical urgencies is not useful to me - need to rank urgency of 
calls and establish priorities for meeting requests. Also, staff need to be familiar with 
names and spellings of drugs in order to assist callers." 

"Although this is probably impossible. . .it would be great if the DIS staffed students who 
were nearing the end of their clinical rotations rather than the beginning. Sometimes 
students don't understand even the most basic questions due to clinical inexperience.
Although they do eventually come through after being helped by Paul and/or Randy I'm 
sure." 

"Great job - We need you guys." 

"Good job, keep it up." 



214 

"Thank you for helping me to 'look more knowledgeable' to other health care 
professionals and patients. Your behind the scenes work is greatly appreciated." 

"DIRPC is a SUPER important resource for my pharmaceutical questions." 

"1 am a very satisfied first time user, my coworkers have always been very satisfied with 
the service." 

"Keep up the great work! Your accessibility and willingness to help is most appreciated!" 

"The drug information service personnel have always been polite. They have always 
contacted me in a timely fashion. If the available resources don't have the information I 
need, the personnel go 'the extra mile' and contact drug manufacturers etc., to help me 
get the information I need. My colleagues and I appreciate all the hard work!" 

"Please allow the responder or ask responder of DIPRC [to] identify him/herself whenever 
he/she responds to a call. Sometimes caller finds himself very awkward in asking the new 
responder who the first one was and all the good intention was lost." 

"Go Gators." 

"They are not a resource in pentineal dialysis drugs and this would be helpful." 

"I think it would be helpfiil if the DIRPC stafif asked what sort of documentation is needed 
- 1 have had many positive interactions with the DIPRC, but had one instance in which I 
was provided with no supporting documents when I expected some." 

"Unfortunately the DIPRC was unable to provide me the very specific information I 
requested and I was forced to request it directly from the drug company. Admittedly, the 
info. I requested was detailed and new regarding latex content of many drugs. Thanks." 

"As a forensic chemist, working for the State of Florida (FDLE) they survey does not 
really apply - but I am gratefial for the service. I would be willing to supply information 
not in your computers as to the identity of tablets, etc." 

"The only problem I have ever had was the inability to understand the person answering 
the phone and getting him to understand me. This only occurred once. This was due to 
language barrier." 

"I have had very positive experiences." 

"I am a deputy sheriff and used your service to identify some prescription drugs I 
confiscated off a[n] arrestee. Your service was polite, prompt and very helpful." 



215 

"I really like the feature of having DIPRC Fax the answers to the Dr's/Rn's/ARNP's, etc. 
especially when time is of the essence. Occasionally I would like a synopsis of the answer 
to the query for my own info. But probably less than ½ the time."

"I wonder if it would be possible to get extemporaneous compounding help [through] the 
DIRPC on a fairly rapid timetable? I got used to having access to the help desk at
P. CCA. out in Texas while purchasing enough from them to warrant the service for 2 
years recently." 

"You have been great - You are great - and I expect you shall continue being great - and 
we thank you ." 

"I appreciate that, while I'm not a subscriber, I receive the same courteous, considerate, 
and prompt attention to my questions as a subscriber." 

"Being that this was the first time I used your service, I do not know if what I have to say 
is a fair assessment. With that out of the way, it took 24 hours to get a response to my 
question. That really was not a problem for me in this particular situation; however, I can 
see how that much time would be frustrating in other situations, especially if that is a 
typical response time. But I think that is a great service you offer. Thanks! ©" 

"Found the service useful. Follow up info, on PRO-FIBE was sent out to me rather 
quickly, and I appreciate it." 

"The DIPRC offers extremely reliable and prompt service! Keep up the good work!" 

"DIPRC provides an excellent experience for clinical students. Randy and Paul know just 
about everything or know where to look it up!" 

"Need to make shorter questionnaire!" 

"I enjoy this service. Thank you." 

"DIRPC is a wonderful resource and I really appreciate it's availability to health care 
practitioners." 

"It would be great to have this type of service of other than nutrient drug related research. 
But your folks are doing great, thank you!" 

"The two ladies that helped me were considerate, knowledgeable, interested, and 
efficient." 

"Outstanding response time." 



216 



"Most frequently my needs are related to cost and enough information about a drug to 
justify it's use to an insurance company. A negative outcome can mean that a patient's 
treatment plan could/would be changed to a less optimal course of treatment " 

"I appreciated the information received but it was much more technical than I wanted. It 
would be helpful to have a broader range of references. The person I spoke with was very 
willing to help." 

"Questions/Responses via electronic mail, or a Web site set-up would be a nice addition!" 
"Never used U.F." 

"I have been impressed with all aspects of assistance, courtesy and timeliness. [Name 
omitted] was very helpful." 

"Great Job!! Please continue. Thanks." 

"Keep up the good work." 

"Sometimes it is difficult to understand the DIPRC students who answer the phone and 
they require repeating the request several times before they understand it." 

"If caller could be assisted in prioritizing questions appropriately somehow, workload 
could be evened out. i.e., is patient bleeding? , or in an exam room? or won't be until 
next week's appointment." 

"I'm a caller from another DIG inquiring about formulary status of fosphenytoin. 
Information presented was concise, accurate, and timely." 

"No problems with DIRPC. I do my own initial research into problems; when exhausted, 
I use DIPRC as 'last resort' to solve problem. Given that scenario, any answer to a 
problem is welcome." 

"This is a wonderful service you provide. Is the service Umited to just UF or is it available 
county wide? I never know you existed as a resource! Thanks so much!" 

"Response time was fantastic and most beneficial." 

"The employee was very helpfiil and willing to look for answers." 

"Thanks for the wonderful-clear-concise info, on the drugs we requested! [Name omitted] 
was very helpful!" 

"Allow for submission of questions via e-mail." 



217 



"Don't make subscribers give you our demographic data each time we call. You already 
have it and should be able to access it." 

"Your services are extremely useful for questions which do not require an immediate 
answer. However, for questions in which a physician is waiting on an answer A.S.A.P., 
the turnaround time is too long for the service to be useful. As a community hospital with 
no attached health center, finding literature can be difficult. It would be helpful if there
was a system in place to answer more urgent questions quickly." 

"Upon the advice of a colleague, I called the DIPRC. I have only used the service once, 
therefore it is difficult to judge the overall service. I will say that when I did call the 
service was helpful and I found out what I needed to know in a short period of time. I 
would definitely use the service again if the need arises." 

"Have more experienced people available if at all possible. From initial conversations, I 
sometimes get the idea the person is very uninformed about clinically relevant issues - at 
other times just the opposite." 

"The DIPRC is an invaluable resource for practicing pharmacists. Thank you for the 
assistance you provide!" 

"This was my first experience with using DIPRC - <name omitted> was very helpfiil - 
problem was researched and answered efficiently/promptly - Thank you." 

"I have had trouble understanding some of the personnel on various occasions. They try 
hard, but it is a little frustrating sometimes." 

"Internet access to e-mail requests would be usefial." 

"I have never had a bad experience with the DIPRC. All questions have been answered in 
a very timely fashion. Some questions are difficult and have no answers. Receiving that 
info is also good and reassuring to professionals who reached their own dead ends. Thank 
you for all your help." 

"The U of F DIPRC was initiated by M. Peter Prevonka when a question arose about a 
decent and factual information given in a timely manner. I have not been disappointed. 
All health care professionals had a need for current, practical answers to questions. More 
information has come our way in the last 8 years than in the previous 200 years. Ask 
Oscar Araujo as a senior advisor and precious resource." 

"[The] person who helped me was very polite and helpful. ©" 

"The problem with the slow turn around time of my last question was the result of one 
new student's lack of experience. The question was more of a confirmation of information 
and the turnaround time was not critical." 



218 



"Your help was great in coding the durg. 'New' street drugs are not always in our encoder 
or ICD-9-CM coding books. Thank for you help." 

"I have had an experience that could have resulted in a more positive outcome. 
Apparently, a pharmacy intern researched a question I had and provided what I found to 
be a superficial treatment. Many questions only need such treatment some need more. 
Consider giving the caller the choice in having [information] prepared by a student or a 
practitioner." 

"Courteous, prompt, interested in requests." 

"Service is very much appreciated and have recommended it to others. 

"Very much appreciated." 

"Another drug information service did a 'bash the pharmacist' response and never 
answered the question. " 

"You might consider a membership in the Life Extension Foundation which monitors 
research done in other countries, as well as leading edge research in the U.S. 1-800-841- 
5433." 

"Shortened turn-around time would be great." 

"I personally find the DIPRC to be extremely useful especially working as a retail 
pharmacist whereby our resources are very limited and it is hard to answer patients' 
questions and concerns in a timely fashion. But thanks to the DIRPC, their assistance in 
the past have proven very valuable. " 



APPENDIX Q 
SIMULATION BLOCK DIAGRAMS 




[Simulation block diagrams (pages 219-224): GPSS/H block diagrams of the simulated
call-handling process, built from ASSIGN, TRANSFER, QUEUE, ADVANCE, DEPART, and LOGIC
blocks corresponding to the program listing in Appendix R.]
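For readers without access to the original diagrams, the call-handling flow they depict can be summarized in a few lines of code. The sketch below is only an illustration in Python (using the SimPy library), not the original GPSS/H model: the four phone lines, the one-minute and five-to-ten-minute redial waits, the limit of three dialing attempts, and the random selection of a student are taken from the Appendix R listing and its comments, while the arrival rate, the number of students, and the call-taking time are placeholder assumptions.

    import random
    import simpy

    PHONE_LINES = 4      # "Set Number of Phone Lines" in the Appendix R listing
    STUDENTS = 2         # assumed staffing level, for illustration only

    def caller(env, name, lines, students, log):
        """One incoming call: up to three dialing attempts, then the caller balks."""
        for attempt in range(3):
            if lines.count < lines.capacity and students.count < students.capacity:
                # A line is free and a student can pick up: take the question.
                with lines.request() as line, students.request() as student:
                    yield line & student
                    yield env.timeout(random.uniform(1, 5))  # time to take the call (assumed)
                log.append((name, "answered", env.now))
                return
            # Busy signal or no one answered: wait, then redial.
            yield env.timeout(1 if attempt == 0 else random.uniform(5, 10))
        log.append((name, "balked", env.now))

    def arrivals(env, lines, students, log):
        """Generate callers through the day; a flat 20-minute mean interarrival time is assumed."""
        n = 0
        while True:
            yield env.timeout(random.expovariate(1 / 20.0))
            n += 1
            env.process(caller(env, f"call-{n}", lines, students, log))

    random.seed(1)
    env = simpy.Environment()
    lines = simpy.Resource(env, capacity=PHONE_LINES)
    students = simpy.Resource(env, capacity=STUDENTS)
    log = []
    env.process(arrivals(env, lines, students, log))
    env.run(until=480)   # one eight-hour day, measured in minutes
    print(len(log), "calls;", sum(1 for _, outcome, _ in log if outcome == "balked"), "balked")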




APPENDIX R 
SIMULATION PROGRAM CODE 



GPSS/H PROFESSIONAL RELEASE 3.0 (UG207)



Get 1997 23:28:54 



FILE: dls.gps 



LINES IF DO BLOCK#

1 

2 

3 

4 

5 

6 

7 

8 

9 
10 
11 
12 
13 
14 
15 
16 
17 
18 
19 
20 
21 
22 
23 
24 
25 
26 
27 
28 
29 
30 
31 
32 
33 
34 
35 
36 
37 
38 
39 
40 
41 
42 
43 



COMMENTS 



45 
46 
47 



50 
51 
52 
53 
54 
55 
55 
57 
58 



*LOC OPERATION A,B,C,D,E,F,G
*234b678904-2345678901+

REALLOCATE COM,300000
SIMULATE

* INITIALIZE NECESSARY STORAGES, VARIABLES, AND FUNCTIONS



Start GPSS/H Job 



■ -i- -J.- -k -k A- -A- ± ^- -k- i- ± -A- -I- i- -k- -k -k k k 



          INTEGER   &I,&M,&SV,&D,&STD,&CL
          INTEGER   &SVC,&AMC,&STMG,&LOGT
          REAL      &CLKMIN,&AVAR,&DA,&TA,&HOUR
          REAL      &AVIS,&DLCT,&DLPC,&PM,&STM
          REAL      &SVPF,&PSAJI,&TQM,&TQP,&BQM,&BQT

TOTALQ    EQU       10,Q
ANSWERQ   EQU       11,Q
BOARDQ    EQU       12,Q
SERVICEQ  EQU       13,Q
RETURNQ   EQU       14,Q
WAITSVQ   EQU       15,Q

LINES     STORAGE   4

STUD      VARIABLE  (RN3*&SV/1000)+1



DELAY1    BVARIABLE ((C1-P7)>15)+((&D-P8)>=1)
DELAY2    BVARIABLE (&D-P8)>=1
DELAY3    BVARIABLE (&D-P8)>=3

SETTYP EUNCTICN P2,D4 
l,QrYPE4/3,QIYPE3/4,QTYPE2/8,aTYPEl 

BUSY EUNCTICN RN2,C2 

0,5/1,10 

REMCHK EUNCnCN RN6,C2 
0,0/1,59 

ENTSVC EUNCTIOM P5, 04 
2,TIMV4,TCNR/6,TCNR/8,STAT 



QTYPEl EUNCTICM RN4,D3 
.483,1/. 830, 2/1, 3 

QrYPE2 EUNCTICN RN4,D3 
.360,1/. 743,2/1, 3 

QTYPES EUNCTICN RN4,D3 
.215,1/. 572, 2/1, 3 

QTYPE4 EUNCTICN RN4,D3 
.278,1/. 658, 2/1, 3 

gjEST9 EUNCTICN RN5,D4 
.120,8/. 681, 6/. 847, 4/1, 2 

OESTIO EUNCTICN RN5,D4 



Initialize Variables 



Initialize Queue Numbers

Set Number of Phone Lines

Variable: Student Selection

Boolean Variable: STAT Delay
Boolean Variable: Today Delay
Boolean Variable: Date Delay

Routing Function: Question Type
Function: Wait 5-10 Minutes
Function: Check once per hour
Routing Function: Begin Answering



QT;,pe Generator l=Group l/2=Group 2/3=Group 3 
for Stat 0-iestions 

QType Generator l=Group l/2=Group 2/3=Group 3 
for Today Questions 

QType Generator l=Group l/2=Group 2/3=Group 3 
for Dated Questions 

QType Generator l=Group l/2=Group 2/3=Group 3 
for No Rush Questions 

Function: Priorities for 9:00-10:00 



Function: Priorities for 10:00-11:00 



225 



226 



59 
60 
61 
62 
63 
64 
65 



.110,8/. 690,6/. 855,4/1, 2 

QJESTll aiNCTICN RN5,D4 
.100, 8/. 562, 6/. 863, 4/1, 2 

QUEST12 FUNCriOJ RN5,D4 
.100, 8/. 685, 6/. 842, 4/1, 2 



Elinction: Priorities for ] 1:00-12:00 



Rjnction: Priorities for 12:00-1:00 



67 



gjESTlS EUNCTICN RN5, D4 
.110,8/. 508, 6/. 841, 4/1, 2 



Eljnction: Priorities for 1:00-2:00 



70 
71 
72 
73 
74 
75 
76 
77 
78 
79 
80 
81 
82 
83 
84 
85 



gjEET14 EUNCTICN RN5,D4 
.110, 8/. ,572, 6/. 863, 4/1, 2 

QLEST15 EUNCTICN RN5,D4 
.140, 8/. 498, 6/. 817, 4/. 998, 2 

gjESTie EUNCTICN RN5, D4 
.100,8/. 320,6/. 822, 4/1, 2 

IAT9 EUNCTICN RN7,C6 

0, 1/ . 403, 10/ . 701, 20/ . 830, 30/ . 959, 40/1, 65 

lATlO EUNCTICN RN8,C10 
0,1/. 251,10/. 495, 20/. 641, 30/. 760, 40 
. 844, 50/ . 903, 60/ . 947, 70/ . 981, 80 

1,110 



EUnctlon: Priorities for 2:00-3:00 



Function: Priorities for 3:00-4:00 



Etinction: Priorities for 4:00-5:00 



EUnction: lATs for 9-10 



Function: lATs for 10-11 



87 



90 

91 

92 

93 

94 

95 

96 

97 

98 

99 

100 

lOI 

102 

103 

104 

105 

106 

107 

108 

109 

110 

111 

112 

113 

114 

115 

116 

117 

118 

119 

120 

121 

122 

123 

124 

125 

126 

127 

128 

129 

130 

131 

132 

133 



mill EUNCTICN [^9,C11 
0,1/. 167, 10/. 375,20/. 561, 30/. 682, 40 
. 750, 50/ . 854, 60/ . 893, 70/ . 957, 90 
.979,110/1,170 

IAT12 FUNCT^OJ RN10,C14 
0,1/. 188, 10/. 387, 20/. 491, 30/. 642, 40 
. 703, 50/ . 768, 60/ . 816, 70/ . 868, 80 
. 892, 90/ . 925, 100/ . 957, 110/ . 976, 120 

1,190 

lATAFT EUNCTICN RN11,C15 
0,1/. 149, 10/. 325,20/. 425,30/. 542, 40 
.619, 50/. 675, 60/. 734, 70/. 786, 80 
. 823, 90/ . 861, 100/ . 919, 130/ . 958, 180 
.986,200/1,290 

GRCUPl EUNCTICN RN12,C13 
0, 0/ . 213, 10/ . 394 , 20/ . 504 , 30/ . 567 , 40 
.551, 50/. 709, 60/. 764, 90/. 811, 120 
. 865, 150/ . 906, 180/ . 945, 250/1, 310 

GRCUP2 EUNCTICN ™i3,C16 
0, 0/ . 079, 10/ . 191, 20/ . 322, 30/ . 428, 40 
.480, 50/. 533, 60/. 605, 70/. 651, 80 
. 697, 100/ . 776, 130/ . 829, 160/ . 888, 210 
.954, 260/. 980, 300/1, 530 

GRCUP3 FUNCTION RN14,C20 
0, 0/ . 01, 10/ . 12, 20/ . 19, 30/ . 25, 40 
.30,50/. 38, 50/. 41, 70/. 49, 80 
. 56, 90/ . 63, 100/ . 68, 110/ . 72, 120 
.76,140/. 80,160/. 86, 190/. 90,210 
.93, 290/. 95, 320/1, 470 

STATIM EUNCTICN RN15,C15 
0,0/. 12, 5/. 255,10/. 490,15/. 627, 20 
.706, 25/. 725,30/. 784, 35/. 804, 40 
. 862, 45/ . 901, 50/ . 921, 60/ . 941, 65 
.961,75/1,80 

TCMIN EUNCTICN RN16,C6 
0, 0/ . 683, 5/ . 930, 10/ . 979, 15/ . 990, 20/1,45 

RIMINI tUNCnCN F<N17,C8 

0, 0/ . 554 , 2/ . 820, 4/ . 916, 6/ . 944 , 8 
.980, 10/. 994, 16/1, 28 



Function: lATs for 11-12 



Function: lATs for 12-1 



EUnction: lATs for Afternoon 



EUncLion: Service Times for Question Type 1 



EUnction: Service Times for Question Type 2 



Function: Service Ti.mes for C^estion Type 3 



Elinction: Service Ti.mes for Stat Calls 



EUnction: Service Times for Taking Calls 



EUnction: Return Call I'ime for Group 1 



227 



134 
135 

136 
137 
138 
139 
140 
141 
142 
143 
144 
145 
146 
147 
148 
149 
150 
151 
152 
153 
154 
155 
156 
157 
158 
159 
160 
161 
162 
163 
164 
165 
166 
167 
168 
169 
170 
171 
172 
173 
174 
175 
176 
177 
178 
179 
180 
181 
182 
183 
184 
185 
186 
187 
188 
189 
190 
191 
192 
193 
194 
195 
196 
197 
198 
199 
200 
201 
202 
203 
204 
205 
206 
207 
208 



R'IMIN2 EUNCTICN RN18,C9 
0, 0/ . 438, 2/ . 700, 4/ . 877, 6/ . 894, 8 
.95, 10/. 961, 12/. 983,16/1, 48 

RTMIN3 EUNCTICN RNl 9 , CI 2 
0, 0/ . 358, 2/ . 538, 4/ .769, 6/ . 829, 8 
. 880, 10/ . 897, 12/ . 923, 16/ . 940, 22 
.966,26/. 982,36/1, 46 

RTSTAT RJNCriCN RN20,C6 
0, 0/ . 610, 2/ . 837, 4/ . 938, 6/ . 940, 8/1,10 



Elmction: Return Call Time for Group 2 



Function: Return Call TirtB for Group 3 



E\jnction: Return Call Timo for Stat Ouestions 



i^'^-ir-k-k-ki^^-^-^-k-k^^^-ie-k-k-k-k-k-k-k-i^-k-k-k-k 
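The C-type FUNCTION statements in this listing (for example TCMIN, the call-taking time) define empirical distributions as pairs of cumulative probability and value; GPSS/H samples them by drawing a uniform random number and interpolating linearly between the listed points. As a minimal illustration only (written in Python rather than GPSS/H, with the TCMIN points copied from the listing above), the sampling scheme looks like this:

    import bisect
    import random

    # (cumulative probability, value) pairs from the TCMIN function in the listing.
    TCMIN = [(0.0, 0), (0.683, 5), (0.930, 10), (0.979, 15), (0.990, 20), (1.0, 45)]

    def sample_continuous(points, u=None):
        """Sample a GPSS/H C-type (continuous) function: draw a uniform random number
        and interpolate linearly between the bracketing (probability, value) pairs."""
        if u is None:
            u = random.random()
        probs = [p for p, _ in points]
        i = bisect.bisect_left(probs, u)
        if i == 0:
            return points[0][1]
        (p0, v0), (p1, v1) = points[i - 1], points[i]
        return v0 + (v1 - v0) * (u - p0) / (p1 - p0)

    # About 68% of sampled call-taking times should fall at or below five minutes.
    random.seed(1)
    times = [sample_continuous(TCMIN) for _ in range(10000)]
    print(round(sum(t <= 5 for t in times) / len(times), 3))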



10 

11 

12 
13 
14 
15 
16 
17 
18 
19 
20 
21 
22 
23 
24 
25 
26 
27 
28 
29 
30 
31 
32 
33 
34 

35 



36 



* GENERTOE CKLLS 

i- -il^ it * * + i- ■t -t * it ■Jr ■*- -Jf * -fc -Jr + * *- ^t ^t ^t ■^ ■t ■J; * 4r ^t ^t ^ ^ -fc -Jf + ^ 

GENERATE (FN(IAT9)*iAM) , , (&CL*480) , , ,10PH 

TESr G CI, ( S,CL*480) , PASTCLK 

TEST LE CI, (S,CL*480+59),PflSrCLK 

ASSIOM 5,FN(QJEST9) 

TRPNEER ,C7\11£R 

GENERATE (EN(IAT10)*5>AM) , , (S.CL*480+59) , , ,10PH 

TEST GE CI, (&CL*480+60),PASTCLK 

TEST IE CI, (&CL*480+119),PASTa.K 

ASSIOJ 5,EN(gjEST10) 

TRANSFER ,miI£R 

GENERATE (EN(;[AT11)*&AM) , , (&CL*480+119) , , ,10PH 

TEST GE CI, (&a>480+120),PASrCLK 

TEST LE CI, (S,CL*480+179),PASrCLK 

ASSIGN 5,EN(QJEST11) 

TRANSFER , CALLER 

GENERATE (EN(IAT12)*&AM) , , (&CL*480+179) , , ,10PH 

TEST GE CI, ( &CL*480+180) , PASTCLK 

TEST I£ CI, ( 6,CL*480+239) , PASTCLK 

ASSIGN 5, FN(aJl^Sn,2) 

TRANSFER , CALLER 

GENERATE (EN(IATAET)*&AM) , , (&CL*480+239) , , ,10PH 

TEST GE CI, (5,CL* 4 8 0+2 40), PASTCLK 

TECT LE CI, (&CL*480+480), PASTCLK 

TIMEl TEST L CI, (&CL*480+300) ,TIME2 

PSSIW 5,EN(QJEST13) 

TRANSFER ,CAI,.LER 

TIME2 TEST L CI, (&CL*480+360) ,TIME3 

ASSIGN 5,FN(gjEST14) 

TRANSFER , CALLER 

TIME3 TESTE CI, (&CL*480+420) ,TIME4 

ASSIGN 5,FN(aJEST15) 

TRANSFER , CALLER 

TIME4 ASSIGN 5, ro(ClJEST16) 

TRANSFER , CALLER 



Generate Callers From 9-10 



Generate Callers From 10-11 



Generate Callers From 11-12 



Generate Callers From 12-1 



Generate Afternoon Callers 
& Shut Down Phones 



PAsrraK terminate o 



Terminate extra arrivals from other GEN blocks 



* CAIX ARRIVES * 

•k-k-k-k-kk-k-k-k-k-k-k~k~k-kkkk-k^^-k-k~kkkk-k-k-k-k~kk-k-k-k-k-k-h-ki-if-k-k-k^^^-k^-k-k"k-k-k-i^r^^-^-k-k-k-k*-kk-kk 



CAUER ASSIGN 



6,1 



37 




PRIORITY 


11 


38 


I'KYH-IN 


TRANSFER 


BOTH, PHNLNE, BUSY 


39 


PUNINE 


ENTER 


LINES 


40 




TRANStEK 


,SELSTUD1 


41 


BUSY 


BLET 


PH6=PH5+1 


42 




TEST G 


P6,2,REPEAT1 


43 




TESI' G 


P6,3,REPEAT2 


44 


BALKL 


TER^4INATE 






Assign No. of Tries 

Incoming Calls Have iiighest Priority 

Test Phone Lines 

Phone call arrives 



Try two times 
Balk, Phone is Busy 






*  SELECT STUDENT TO ANSWER CALL  *






SELSTUD1 SELECT NU 1,1,&SV,,,RINGOUT      Check if a Student Avail. to Take Call
SELSTUD2 ASSIGN    1,V(STUD)              Randomly Assign a Student
         GATE LR   P1,SELSTUD2            Check to see if Student is on phone
         TRANSFER  ,TAKECALL              If not on phone, student takes call
RINGOUT  LEAVE     LINES                  Everyone is Busy
         BLET      PH6=PH6+1
         TEST G    P6,2,REPEAT1           Try two times
         TEST G    P6,3,REPEAT2
BALK2    TERMINATE                        Balk, No one available to answer phone

*  PHONE WAS BUSY OR NO ONE ANSWERED
REPEAT1  ADVANCE   1                      Wait one minute then call again
         TRANSFER  ,TRYPHN
REPEAT2  ADVANCE   FN(BUSY)               Wait 5-10 minutes then call again
         TRANSFER  ,TRYPHN
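A caller who finds the lines busy, or whose call rings out, retries: parameter 6 counts attempts, REPEAT1 waits one minute before redialing, REPEAT2 waits a sampled 5-10 minutes, and after the third failed attempt the caller balks. A small Python sketch of that retry policy; the per-attempt success chance is a placeholder, not taken from the fitted model:

    import random

    def line_free() -> bool:
        """Placeholder for one dialing attempt; True means a line and a student were free."""
        return random.random() < 0.6           # hypothetical 60% success per attempt

    def place_call(max_tries: int = 3):
        """Retry like TRYPHN/REPEAT1/REPEAT2: 1-minute wait, then a 5-10 minute wait, then balk."""
        clock = 0.0
        for attempt in range(1, max_tries + 1):
            if line_free():
                return True, clock             # connected after `clock` minutes of retrying
            if attempt == 1:
                clock += 1.0                   # REPEAT1: wait one minute, then call again
            elif attempt == 2:
                clock += random.uniform(5.0, 10.0)   # REPEAT2: wait 5-10 minutes, then call again
        return False, clock                    # balk after the third failed attempt

    if __name__ == "__main__":
        results = [place_call() for _ in range(10_000)]
        got_through = sum(1 for ok, _ in results if ok)
        print(f"fraction of callers eventually connected: {got_through / len(results):.3f}")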



*  TAKE PHONE CALL  *
*********************************************************************
*  STUDENT TAKES CALL  *
*********************************************************************



TAKECALL BLET      &DA=&DA+1              Count Daily Arrivals
         BLET      &TA=&TA+1              Count Total Arrivals
         QUEUE     TOTALQ                 Track Total Service Time
         QUEUE     ANSWERQ                Track Time to Answer Call
         PREEMPT   P1,PR                  Interrupt Current Work to Answer Phone
         LOGIC S   P1                     On Phone
         ADVANCE   FN(TCMIN)              Take Information
         RETURN    P1
         LEAVE     LINES                  Hang Up Phone
         DEPART    ANSWERQ
         LOGIC R   P1






**********  SET QUESTION TYPES  **********

SETTYP   TRANSFER  ,FN(SETTYP)            Set Question Type

QTYPE1   ASSIGN    2,FN(QTYPE1)           If Question Type 1 Call QUEST1 Function
         PRIORITY  P5
         TRANSFER  ,TESTQ1
QTYPE2   ASSIGN    2,FN(QTYPE2)           If Question Type 2 Call QUEST2 Function
         PRIORITY  P5
         TRANSFER  ,TESTQ1
QTYPE3   ASSIGN    2,FN(QTYPE3)           If Question Type 3 Call QUEST3 Function
         PRIORITY  P5
         TRANSFER  ,TESTQ1
QTYPE4   ASSIGN    2,FN(QTYPE4)
         PRIORITY  P5



**  ASSIGN SERVICE TIMES  **
*********************************************************************

TESTQ1   TEST E    P2,1,TESTQ2
         ASSIGN    3,FN(GROUP1)           Assign Service Time for Question Type 1
         TRANSFER  ,AGNTIME
TESTQ2   TEST E    P2,2,TESTQ3
         ASSIGN    3,FN(GROUP2)           Assign Service Time for Question Type 2
         TRANSFER  ,AGNTIME
TESTQ3   ASSIGN    3,FN(GROUP3)           Assign Service Time for Question Type 3
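Each call is first given a question type (parameter 2) and then a service time (parameter 3) drawn from that group's own distribution, so simple and complex questions load the students differently. A brief Python sketch of the same two-step assignment; the type mix and distribution parameters below are placeholders, not the fitted GROUP1-GROUP3 functions:

    import random

    # Hypothetical service-time distributions (minutes) for the three question groups;
    # the GPSS/H model samples empirical functions GROUP1, GROUP2, GROUP3 instead.
    SERVICE_TIME = {
        1: lambda: random.uniform(5, 15),       # shorter questions
        2: lambda: random.uniform(15, 45),      # intermediate questions
        3: lambda: random.uniform(45, 120),     # longer, literature-search style questions
    }
    QUESTION_TYPE_PROBS = [(1, 0.5), (2, 0.3), (3, 0.2)]   # assumed mix of question types

    def new_question():
        """Assign a question type, then a service time for that type (like TESTQ1-TESTQ3)."""
        r, cum = random.random(), 0.0
        for qtype, p in QUESTION_TYPE_PROBS:
            cum += p
            if r <= cum:
                return qtype, SERVICE_TIME[qtype]()
        return 3, SERVICE_TIME[3]()

    if __name__ == "__main__":
        qtype, minutes = new_question()
        print(f"question type {qtype}, service time {minutes:.1f} min")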



**  MARK ENTRANCE TIME AND DAY  **
*********************************************************************

AGNTIME  ASSIGN    7,C1                   Track Arrival Time
         ASSIGN    8,&D                   Track Arrival Day
         QUEUE     BOARDQ                 Post on Bulletin Board






*  FIND WHO'S AVAILABLE TO WORK ON QUESTION  *
*********************************************************************

AVSTUD1  TEST E    Q(P1),0,AVSTUD2        Check if Student who answered is available
         TRANSFER  ,ENTSVC
AVSTUD2  SELECT E  1,1,&SV,0,Q,AVSTUD3    Check if a Student Avail. to Take Call
         TRANSFER  ,ENTSVC

AVSTUD3  TEST G    P5,7,AVSTUD5           Get First Available Student If Stat
AVSTUD4  TRANSFER  BOTH,FST1,OTHST1       Try Answering Student First
FST1     PREEMPT   P1,PR
         ADVANCE
         RETURN    P1
         TRANSFER  ,ENTSVC
OTHST1   ASSIGN    4,&SV                  Then, Try All Others
OTHST2   TRANSFER  BOTH,,LOOP1



LOOP1    PREEMPT   P4,PR
         ADVANCE
         RETURN    P4
         ASSIGN    1,P4
         TRANSFER  ,ENTSVC
         LOOP      4,OTHST2
         ADVANCE   .25                    Loop Until Student Becomes Available
         TRANSFER  ,AVSTUD3

AVSTUD5  TRANSFER  .5,,ENTSVC             50% of Time, Student Answering Works Question
         BLET      &STD=V(STUD)           Otherwise, Randomly Assign a Student
         TEST NE   P1,&STD,AVSTUD5
         BLET      PH1=&STD
ENTSVC   TRANSFER  ,FN(ENTSVC)

*********************************************************************
*  WORK ON QUESTION  *
*********************************************************************



STAT     ASSIGN    3,FN(STATTM)
TFNR     PREEMPT   P1,PR
         QUEUE     P1
         DEPART    BOARDQ
         QUEUE     SERVICEQ
         ADVANCE   P3*&STM                Work on Question
         RETURN    P1                     Done Answering Question
         DEPART    SERVICEQ
         TRANSFER  ,WAITRTN



*********************************************************************
*  RETURN ANSWER  *
*********************************************************************


WAITRTN  QUEUE     RETURNQ                Track Time to Return Call
RTQ1     TEST E    P5,8,RTQ2              Return Stat Calls Immediately
         PRIORITY  9                      Finish Working on a Stat Question First
         TRANSFER  ,ONPHN
RTQ2     TEST E    P5,6,RTQ3
         PRIORITY  7                      Return Date and No Rush Answers As Time Allows
         TRANSFER  ,ONPHN
RTQ3     PRIORITY  5
ONPHN    GATE LR   P1                     Make Sure Student is Not Answering Call
         PREEMPT   P1,PR                  Return Answer
         ENTER     LINES
         LOGIC S   P1                     On Phone
DONE1    TEST E    P5,8,DONE2
         ADVANCE   FN(RTSTAT)             Talk to Caller
         TRANSFER  ,RETANS
DONE2    TEST E    P2,1,DONE3
         ADVANCE   FN(RTMIN1)
         TRANSFER  ,RETANS
DONE3    TEST E    P2,2,DONE4
         ADVANCE   FN(RTMIN2)
         TRANSFER  ,RETANS
DONE4    ADVANCE   FN(RTMIN3)
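Completed answers wait in RETURNQ and are called back in urgency order: stat questions are raised to priority 9, "today" questions to 7, and dated or no-rush questions to 5 before the return call seizes a phone line and the student. A compact Python sketch of that callback ordering using a priority queue; the priority values mirror the listing, everything else is illustrative:

    import heapq
    import itertools

    # Higher GPSS priority should be served first, so push the negated value.
    PRIORITY = {"stat": 9, "today": 7, "dated_or_no_rush": 5}
    _counter = itertools.count()           # tie-breaker keeps FIFO order within a priority class

    def add_answer(queue, urgency: str, caller_id: int) -> None:
        """Queue a finished answer for callback, most urgent first (like RTQ1-RTQ3)."""
        heapq.heappush(queue, (-PRIORITY[urgency], next(_counter), caller_id))

    def next_callback(queue):
        """Return the caller to phone next, or None if nothing is waiting."""
        if not queue:
            return None
        _, _, caller_id = heapq.heappop(queue)
        return caller_id

    if __name__ == "__main__":
        q = []
        add_answer(q, "dated_or_no_rush", 101)
        add_answer(q, "stat", 102)
        add_answer(q, "today", 103)
        print([next_callback(q) for _ in range(3)])   # -> [102, 103, 101]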






RETANS   LEAVE     LINES
         LOGIC R   P1                     Hang Up Phone
         RETURN    P1
         DEPART    RETURNQ
         DEPART    P1
         DEPART    TOTALQ
365 

*********************************************************************
*  FINISHED: Count Delays and Logout  *
*********************************************************************

DLSTAT   TEST E    P5,8,DLTODAY           Track Delays
         ASSIGN    9,BV(DELAY1)
DLTODAY  TEST E    P5,6,DLDATE
         ASSIGN    9,BV(DELAY2)
         TRANSFER  ,LOGOUT
DLDATE   TEST E    P5,4,LOGOUT
         ASSIGN    9,BV(DELAY3)
         TRANSFER  ,LOGOUT

LOGOUT   BLET      &DLCT=&DLCT+P9         Count Delays
         BLET      &SVPF=(30.34*(N(LOGOUT)-&DLCT)_   Average SERVPERF
                   +36.29*&DLCT)/N(LOGOUT)
* TOTALQ   QTABLE   TOTALQ,0,15,540
* SERVICEQ QTABLE   SERVICEQ,0,10,480

         TERMINATE                        Logout and put in QA Bin
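At logout, parameter 9 (set by the DELAY1-DELAY3 boolean variables) marks whether the call was delayed, &DLCT accumulates the delay count, and &SVPF keeps a running weighted average that gives one score (30.34) to calls not counted as delayed and another (36.29) to delayed calls, divided by the number of completed calls. A one-function Python restatement of that bookkeeping; the two weights are copied from the listing, the variable names are illustrative:

    def average_servperf(completed: int, delayed: int,
                         on_time_score: float = 30.34,
                         delayed_score: float = 36.29) -> float:
        """Weighted mean score over completed calls, as in the LOGOUT block:
        (on_time_score*(N - delayed) + delayed_score*delayed) / N."""
        if completed == 0:
            raise ValueError("no completed calls yet")
        on_time = completed - delayed
        return (on_time_score * on_time + delayed_score * delayed) / completed

    if __name__ == "__main__":
        # e.g. 200 completed calls, 35 of them counted as delayed
        print(f"average SERVPERF: {average_servperf(200, 35):.2f}")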

*********************************************************************
*  RANDOMLY CHECK NUMBER IN SERVICE  *
*********************************************************************

         GENERATE  480,,0
NEXTTM   ADVANCE   FN(RIMCHK)
         BLET      &HOUR=(C1-&CL*480)/60
         ASSIGN    1,&HOUR
* TABULATE NIS
* BPUTPIC  LINES=1,FILE=SYSPRINT,(&I,&D,&HOUR,Q(TOTALQ))
*I: ** Day: ** Hour: **.** Number in Service: **
* NIS      TABLE    Q(TOTALQ),0,1,20
         ADVANCE   60-(&HOUR-P1)*60
         TEST GE   &HOUR,7,NEXTTM
         TERMINATE

*********************************************************************
*  MODEL FOR ONE DAY  *
*********************************************************************

         GENERATE  480                    Calls taken from 9:00a-5:00p
         TERMINATE 1                      End of Day

*********************************************************************
*  Run for One Month (20 Days): Non-Antithetic                      *
*********************************************************************

          PUTPIC    LINES=1,FILE=SYSPRINT
 STM SV  I  UTIL    L     W    LQ    WQ    TA  DONE  DLCT  DELAY%  SERVPF

          LET       &AM=1
          DO        &STMC=1,9
          LET       &STM=.75+&STMC*0.05
          DO        &SVC=1,5
          LET       &SV=&SVC
          DO        &I=1,30               Paired Replications
          DO        &RM=1,20              Warm Up for non-antithetic run
          RMULT     100000*&RM
          ENDDO
          DO        &D=1,5
          LET       &CL=&D-1






          LET       &DA=0
          START     1,NP
          ENDDO

          RESET
          LET       &TA=0                 Clear Variables for Next Run
          LET       &CL=0
          LET       &DLCT=0
          LET       &DLPC=0

          RMULT     98500+2500*&I,_       Set &I-th non-antithetic RN1 Seed
                    198500+2500*&I,_      Set &I-th non-antithetic RN2 Seed
                    298500+2500*&I,_      Set &I-th non-antithetic RN3 Seed
                    398500+2500*&I,_      Set &I-th non-antithetic RN4 Seed
                    498500+2500*&I,_      Set &I-th non-antithetic RN5 Seed
                    598500+2500*&I,_      Set &I-th non-antithetic RN6 Seed
                    698500+2500*&I,_      Set &I-th non-antithetic RN7 Seed
                    798500+2500*&I,_      Set &I-th non-antithetic RN8 Seed
                    898500+2500*&I,_      Set &I-th non-antithetic RN9 Seed
                    998500+2500*&I,_      Set &I-th non-antithetic RN10 Seed
                    1098500+2500*&I,_     Set &I-th non-antithetic RN11 Seed
                    1198500+2500*&I,_     Set &I-th non-antithetic RN12 Seed
                    1298500+2500*&I,_     Set &I-th non-antithetic RN13 Seed
                    1398500+2500*&I,_     Set &I-th non-antithetic RN14 Seed
                    1498500+2500*&I,_     Set &I-th non-antithetic RN15 Seed
                    1598500+2500*&I,_     Set &I-th non-antithetic RN16 Seed
                    1698500+2500*&I,_     Set &I-th non-antithetic RN17 Seed
                    1798500+2500*&I,_     Set &I-th non-antithetic RN18 Seed
                    1898500+2500*&I,_     Set &I-th non-antithetic RN19 Seed
                    1998500+2500*&I       Set &I-th non-antithetic RN20 Seed

          PUTPIC    LINES=3,FILE=SYSPRINT,(&SV,&STM,&I)

NON-ANTITHETIC VARIATES  SERVERS: **  ST MODIFIER: **.**  REPLICATION: **

          DO        &D=1,19
          LET       &CL=&D-1
          LET       &DA=0
          START     1,NP
          LET       &AVIS=QA(TOTALQ)
          PUTPIC    LINES=1,FILE=SYSPRINT,(&I,&D,&DA,&TA,&AVIS)
N:** Day: ** Arrivals: ** Ttl.Arrivals: **** Avg.Contents: **.**
          ENDDO

          LET       &CL=&D-1
          LET       &DA=0
          START     1,NP
          LET       &AVUT=(FR1+FR2+FR3+FR4+FR5)/&SV
          LET       &DLPC=&DLCT/N(LOGOUT)
          LET       &CLKMIN=C1
          LET       &AVAR=&TA/&D
          LET       &TQM=QA(TOTALQ)
          LET       &TQT=QT(TOTALQ)
          LET       &BQM=QA(BOARDQ)
          LET       &BQT=QT(BOARDQ)
          LET       &LOGT=N(LOGOUT)
          PUTPIC    LINES=1,FILE=SYSPRINT,(&STM,&SV,&I,&AVUT,&TQM,&TQT,&BQM,&BQT,&TA,&LOGT,&DLCT,&DLPC,&SVPF)

* PUTPIC  LINES=1,FILE=SYSPRINT,(&I,&D,&DA,&TA,&AVIS)
*N:** Day: ** Arrivals: ** Ttl.Arrivals: **** Avg.Contents: **.**
* PUTPIC  LINES=1,FILE=SYSPRINT,(N(LOGOUT),&DLCT,&DLPC,&AVAR)
*No.Comp.: *** No.Delayed: *** Pct.Delayed: *.*** Avg.Day: **.**
* PUTPIC  LINES=1,FILE=SYSPRINT,(&CLKMIN,QT(SERVICEQ),&SVPF)
*CLOCK: ***** Avg.Service: ***.** Avg.SERVPERF: ***.**

          CLEAR                           Clear for Antithetic Replication

*********************************************************************
*  Run for One Month (20 Days): Antithetic                          *
*********************************************************************





          DO        &RM=1,6               Warm Up for antithetic run
          RMULT     100000*&RM
          ENDDO
          DO        &RM=7,20
          RMULT     -(100000*&RM)
          ENDDO

          DO        &D=1,5
          LET       &CL=&D-1
          LET       &DA=0
          START     1,NP
          ENDDO

          RESET
          LET       &TA=0                 Clear Variables for Next Run
          LET       &CL=0
          LET       &DLCT=0
          LET       &DLPC=0

          RMULT     98500+2500*&I,_          Set &I-th non-antithetic RN1 Seed
                    198500+2500*&I,_         Set &I-th non-antithetic RN2 Seed
                    298500+2500*&I,_         Set &I-th non-antithetic RN3 Seed
                    398500+2500*&I,_         Set &I-th non-antithetic RN4 Seed
                    498500+2500*&I,_         Set &I-th non-antithetic RN5 Seed
                    598500+2500*&I,_         Set &I-th non-antithetic RN6 Seed
                    -(698500+2500*&I),_      Set &I-th antithetic RN7 Seed
                    -(798500+2500*&I),_      Set &I-th antithetic RN8 Seed
                    -(898500+2500*&I),_      Set &I-th antithetic RN9 Seed
                    -(998500+2500*&I),_      Set &I-th antithetic RN10 Seed
                    -(1098500+2500*&I),_     Set &I-th antithetic RN11 Seed
                    -(1198500+2500*&I),_     Set &I-th antithetic RN12 Seed
                    -(1298500+2500*&I),_     Set &I-th antithetic RN13 Seed
                    -(1398500+2500*&I),_     Set &I-th antithetic RN14 Seed
                    -(1498500+2500*&I),_     Set &I-th antithetic RN15 Seed
                    -(1598500+2500*&I),_     Set &I-th antithetic RN16 Seed
                    -(1698500+2500*&I),_     Set &I-th antithetic RN17 Seed
                    -(1798500+2500*&I),_     Set &I-th antithetic RN18 Seed
                    -(1898500+2500*&I),_     Set &I-th antithetic RN19 Seed
                    -(1998500+2500*&I)       Set &I-th antithetic RN20 Seed

* PUTPIC  LINES=3,FILE=SYSPRINT,(&SV,&STM,&I)
*
*NON-ANTITHETIC VARIATES  SERVERS: **  ST MODIFIER: **.**  REPLICATION: **

          DO        &D=1,19
          LET       &CL=&D-1
          LET       &DA=0
          START     1,NP
          LET       &AVIS=QA(TOTALQ)
          PUTPIC    LINES=1,FILE=SYSPRINT,(&I,&D,&DA,&TA,&AVIS)
N:** Day: ** Arrivals: ** Ttl.Arrivals: **** Avg.Contents: **.**
          ENDDO

          LET       &CL=&D-1
          LET       &DA=0
          START     1,NP
          LET       &AVUT=(FR1+FR2+FR3+FR4+FR5)/&SV
          LET       &DLPC=&DLCT/N(LOGOUT)
          LET       &CLKMIN=C1
          LET       &AVAR=&TA/&D
          LET       &TQM=QA(TOTALQ)
          LET       &TQT=QT(TOTALQ)
          LET       &BQM=QA(BOARDQ)
          LET       &BQT=QT(BOARDQ)
          LET       &LOGT=N(LOGOUT)
          PUTPIC    LINES=1,FILE=SYSPRINT,(&STM,&SV,&I,&AVUT,&TQM,&TQT,&BQM,&BQT,&TA,&LOGT,&DLCT,&DLPC,&SVPF)

* PUTPIC  LINES=1,FILE=SYSPRINT,(&I,&D,&DA,&TA,&AVIS)
*N:** Day: ** Arrivals: ** Ttl.Arrivals: **** Avg.Contents: **.**
* PUTPIC  LINES=1,FILE=SYSPRINT,(N(LOGOUT),&DLCT,&DLPC,&AVAR)
*No.Comp.: *** No.Delayed: *** Pct.Delayed: *.*** Avg.Day: **.**
* PUTPIC  LINES=1,FILE=SYSPRINT,(&CLKMIN,QT(SERVICEQ),&SVPF)


*CLOCK: ***** Avg.Service: ***.** Avg.SERVPERF: ***.**

          CLEAR                           Clear for next i-th iteration
          ENDDO
          ENDDO
          ENDDO
          END
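The run control pairs each replication &I with an antithetic partner: the second pass reuses the same design point but negates the RMULT seeds for streams 7-20 while keeping streams 1-6 common to the pair, which GPSS/H treats as a request for antithetic sampling from those streams. A short, generic Python illustration of why averaging an antithetic pair can cut variance compared with two independent replications; the response function here is an arbitrary monotone placeholder, not the drug information service model:

    import random
    import statistics

    def response(u: float) -> float:
        """Placeholder monotone output of one replication driven by uniform u."""
        return 10.0 + 5.0 * u ** 2

    def independent_pairs(n: int) -> list[float]:
        return [(response(random.random()) + response(random.random())) / 2 for _ in range(n)]

    def antithetic_pairs(n: int) -> list[float]:
        out = []
        for _ in range(n):
            u = random.random()
            out.append((response(u) + response(1.0 - u)) / 2)   # partner uses the complementary uniform
        return out

    if __name__ == "__main__":
        random.seed(1)
        ind, ant = independent_pairs(20_000), antithetic_pairs(20_000)
        print(f"variance, independent pairs: {statistics.variance(ind):.4f}")
        print(f"variance, antithetic pairs:  {statistics.variance(ant):.4f}")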






BIOGRAPHICAL SKETCH 

Daniel Lee Halberg was born on October 19, 1969, in New London, Connecticut.
He received a Bachelor of Science degree in business administration with a major in
finance from the University of Florida in December of 1991. He entered the Ph.D.
program in the department of Pharmacy Health Care Administration at the University of
Florida in the spring of 1992. Besides serving as a graduate assistant for the department,
Daniel also served as an adjunct faculty member for Santa Fe Community College (SFCC)
in the department of Computer and Information Sciences. In addition, he worked as an
independent consultant for the Center for Professional Development at SFCC, specializing
in computer applications training.

D.L. Halberg's current research interests involve efficient health care resource
allocation decisions using techniques of pharmacoeconomics and operations research, with
particular emphasis on drug use systems and computer simulation.






I certify that I have read this study and that in my opinion it conforms to acceptable standards of
scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor
of Philosophy.


Charles D. Hepler
Professor of Pharmacy Health Care Administration



I certify that I have read this study and that in my opinion it conforms to acceptable standards of 
scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor
of Philosophy. 




Richard Segal 

Professor of Pharmacy Health Care Administration 



I certify that I have read this study and that in my opinion it conforms to acceptable standards of
scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor
of Philosophy.


Earlene Lipowski
Associate Professor of Pharmacy Health Care Administration

I certify that I have read this study and that in my opinion it conforms to acceptable standards of
scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor
of Philosophy.


Barney L. Capehart
Professor of Industrial and Systems Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of 
scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor 
of Philosophy. 







Ralph Swain 

Professor of Industrial and Systems Engineering 



This dissertation was submitted to the Graduate Faculty of the College of Pharmacy and to the
Graduate School and was accepted as partial fulfillment of the requirements for the degree of Doctor of
Philosophy.


May 1998


Dean, College of Pharmacy



Dean, Graduate School